Dynatrace exists to make software work perfectly. Our platform combines broad and deep observability and continuous runtime application security with advanced AIOps to provide answers and intelligent automation from data. This enables innovators to modernize and automate cloud operations, deliver software faster and more securely, and ensure flawless digital experiences.
Job Description
At Dynatrace, Information Systems Engineering manages and transforms data into information for decision-makers. This includes the assessment, design, acquisition, and/or implementation of tools, stores, and pipelines for turning data into information.
We are seeking a Lead Data Engineer who will provide key technical direction for and hands-on effort with a small team of data engineers supporting our Business Intelligence function.
A core part of the role will be directing and helping to implement transformative pipelines of business data into our Snowflake environment. The ideal candidate will have experience and demonstrable skill with Snowflake, Snowpark, and Spark using Python. We are interested in candidates who can demonstrate technical leadership of at least small teams of data engineers, including mentoring and upskilling more junior members of the team.
Key responsibilities:
- Lead the design, implementation, and maintenance of scalable data pipelines in the Snowflake ecosystem, including third-party vendor tools such as AWS, Fivetran, etc.
- Key contributor to a Data Engineering strategy to ensure efficient data management for operations and enterprise analytics
- Key technical expert for business stakeholder engagement on business data initiatives
- Collaboration with colleagues in Data Modeling, BI, and Data Governance teams on platform initiatives
- Provide the technical interface to data engineering vendors
- Ensure data engineering standards align with industry best practices for data governance, data quality, and data security
- Evaluate and recommend new data technologies and tools to improve data engineering processes and outcomes

Qualifications:
- Significant experience in a hands-on data engineering role, especially in relation to business operations data
- Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent experience
- Experience managing stakeholder engagement, collaborating across teams, and working on multiple simultaneous projects
- Hands-on experience implementing robust, scalable data pipelines
- Extensive experience acquiring data from REST APIs
- Strong background in Python/Spark programming, with the ability to write efficient, maintainable, and scalable data pipeline code
- Solid understanding of data warehousing, data lakes, MPP data platforms, and data processing frameworks
- Strong understanding of database technologies, including SQL and NoSQL databases
- Experience with CI/CD pipelines and DevOps practices for data engineering
- Excellent problem-solving and analytical skills
- Snowflake certification or other relevant data engineering certification is a plus

Additional Information
A one-product software company creating real value for the largest enterprises and millions of end customers globally, striving for a world where software works perfectly.
Working with the latest technologies at the forefront of innovation in tech at scale, but also in other areas like marketing, design, and research.
A team that thinks outside the box, welcomes unconventional ideas, and pushes boundaries.
An environment that fosters innovation, enables creative collaboration, and allows you to grow.
A globally unique and tailor-made career development program recognizing your potential, promoting your strengths, and supporting you in achieving your career goals.
A truly international mindset, with Dynatracers from different countries and cultures all over the world, and English as the corporate language that connects us all.
A culture that is being shaped by the diverse personalities, expertise, and backgrounds of our global team.