New York, NY, USA
6 days ago
Data Pipeline - Product Manager

Ready to shape the future of product delivery while crafting solutions that enhance and optimize customer experiences?

As a Product Manager within the Data Pipeline Team, you will enhance and optimize the way products are delivered to customers, creating solutions and efficiencies that enable successful implementations in an expedient and organized way.

Job Responsibilities:

- Define the roadmap and deliver technology features with a strong focus on scalability, resiliency, and stability, while owning, maintaining, and developing a product backlog that supports the overall strategic roadmap and value proposition.
- Lead delivery of features and enhancements while ensuring adherence to the firm's risk, controls, compliance, and regulatory requirements.
- Manage timelines and dependencies while monitoring blockers, ensuring adequate resourcing, and liaising with stakeholders and functional partners.
- Engage with customers to gather feedback and incorporate it into our product vision, while effectively evangelizing DPL's offerings.
- Develop a product strategy and vision that delivers value to customers and aligns with architectural and product strategy, and partner with technical teams to draft Agile epics and stories with appropriate detail and clear acceptance criteria.
- Monitor and analyze product performance and stability to drive improvements and identify new opportunities.
- Communicate clearly with customers, visually and in writing, about the product, including formal release announcements, status updates, product marketing pages, and decks.

Required qualifications, capabilities, and skills:

- 5+ years of experience with ETL processes, including the design, development, and management of ETL workflows.
- 2+ years of experience in the data streaming or big data domain, with hands-on experience in Confluent, Flink, and Kafka.
- Strong technical track record working closely with architecture, engineering teams, and partners (Risk, Compliance, Production Support, etc.) from design through deployment.
- Strong understanding of data architecture principles and best practices, and familiarity with data lakes, data lakehouses, data movement, and data migration.
- Solid understanding of cloud architectures and experience with cloud-based ETL services such as AWS Glue, S3, EMR, and Lambda.
- Knowledgeable in the requirements for ensuring data quality, governance, and security throughout an enterprise-scale organization.

Preferred qualifications, capabilities, and skills:

- In-depth knowledge and practical experience with AWS services pertinent to data engineering and transformation, such as AWS Glue, S3, EMR, Lambda, and others.
- Experience implementing metadata management solutions to support data lineage, cataloging, and governance efforts.
- Skilled in defining and enforcing data standards to maintain consistency and interoperability across various data systems.
