About FactSet:
FactSet is a leader in providing research and analytical tools to finance professionals and offers instant access to accurate financial data and analytics around the world. FactSet clients combine hundreds of databases from industry-leading suppliers into a single powerful information system.
About Enterprise Data & Insights:
The Enterprise Data & Insights Engineering group promotes informed, data-driven decision-making across our organization. The group is on a mission to create enterprise data lakes, develop and maintain a connected enterprise data model, build standard reporting layers, and follow a stringent governance process that enables research and development of new insights for our C-suite executives.
Data quality, timeliness, and lineage lie at the heart of our vision. Our team leverages the latest cloud technologies, and we are a strong contingent of FactSet veterans who have worked across varied FactSet product lines.
Ultimately, our vision is to enable data-driven decision-making at all levels, empowering individuals and nurturing a culture that relies on accurate, reliable, and accessible enterprise data.
Key Responsibilities
Data Observability and Revenue Protection through Product Usage Monitoring:
The Lead Software Engineer will establish a comprehensive observability framework to monitor usage patterns across critical FactSet products such as FactSet Mercury, Portfolio Analytics Services, and the FactSet Content APIs. By setting up structured monitoring processes and periodic checks, the engineer will identify trends and detect deviations from contracted usage levels, keeping consumption aligned with contract terms and supporting revenue protection.
Automated Data Validations for Accurate Usage Tracking:
To safeguard revenue, the Lead Engineer will use automated data validation frameworks such as DBT Core and Great Expectations to enforce data quality standards across usage metrics. Accurate validations prevent discrepancies that could lead to underbilling or other billing inaccuracies, keeping data reliability in place as the foundation for usage-based billing.
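To illustrate the kind of rule such a framework enforces, here is a minimal plain-Python sketch (the field names, product list, and thresholds are hypothetical, not FactSet's actual schema; in practice these checks would be declared as DBT tests or Great Expectations expectations):

```python
# Hypothetical sketch: basic quality rules on usage records, of the sort a
# validation framework would enforce before the data feeds billing.

def validate_usage_records(records):
    """Return a list of (index, reason) pairs for records failing a rule."""
    failures = []
    for i, rec in enumerate(records):
        if not rec.get("client_id"):
            failures.append((i, "missing client_id"))
        if not isinstance(rec.get("api_calls"), int) or rec["api_calls"] < 0:
            failures.append((i, "api_calls must be a non-negative integer"))
        if rec.get("product") not in {"Mercury", "Portfolio Analytics", "Content APIs"}:
            failures.append((i, "unknown product"))
    return failures

records = [
    {"client_id": "C001", "api_calls": 120, "product": "Mercury"},
    {"client_id": "", "api_calls": -5, "product": "Unknown"},
]
print(validate_usage_records(records))  # three failures, all on record 1
```

The same rules expressed declaratively in a framework gain scheduling, alerting, and documentation for free, which is the point of standardizing on one.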
Enforcement of Consumption Contracts and Quota Management to Prevent Revenue Leakage:
The Lead Engineer will implement regular monitoring processes to detect unusual usage patterns or unexpected consumption spikes that could lead to revenue leakage or contract overuse, and will enforce consumption contracts to keep usage within agreed-upon quotas. By tracking usage against contract terms, the engineer helps prevent unmonitored over-consumption, supporting revenue protection through transparency and compliance with usage quotas.
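A minimal sketch of such a check might look as follows (the quota figure, spike factor, and data shape are illustrative assumptions, not contract terms from the source):

```python
# Hypothetical sketch: flag a client whose total usage exceeds the contracted
# quota, or whose latest day spikes sharply above their trailing average.

def check_consumption(daily_usage, quota, spike_factor=3.0):
    """daily_usage: list of daily call counts, oldest first. Returns alerts."""
    alerts = {}
    total = sum(daily_usage)
    if total > quota:
        alerts["over_quota"] = total - quota  # amount over contract
    if len(daily_usage) >= 2:
        baseline = sum(daily_usage[:-1]) / (len(daily_usage) - 1)
        if baseline > 0 and daily_usage[-1] > spike_factor * baseline:
            alerts["usage_spike"] = daily_usage[-1]
    return alerts

print(check_consumption([100, 110, 90, 600], quota=800))
# → {'over_quota': 100, 'usage_spike': 600}
```

In production the same logic would run as a scheduled Databricks job over Delta Lake usage tables, with alerts routed to the account teams.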
Integration of Usage Data with Billing Systems and APIs:
Integrating usage data with billing systems and APIs is crucial for accurate, revenue-aligned invoicing. The Lead Engineer will design and implement RESTful APIs to allow billing platforms to access periodic consumption data, ensuring billing accuracy and alignment with actual usage.
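As a sketch of the response contract such an API might expose, the snippet below shows the handler logic framework-free (the route shape, field names, and figures are assumptions for illustration; a real implementation would sit behind a web framework with authentication):

```python
# Hypothetical sketch of a GET /v1/clients/<id>/consumption?period=YYYY-MM
# handler: look up the period's usage and return a JSON body for billing.
import json

USAGE_STORE = {("C001", "2024-05"): {"Mercury": 42000, "Content APIs": 9500}}

def consumption_response(client_id, period):
    """Return (status_code, json_body) for a consumption lookup."""
    usage = USAGE_STORE.get((client_id, period))
    if usage is None:
        return 404, json.dumps({"error": "no usage recorded"})
    body = {"client_id": client_id, "period": period,
            "usage": usage, "total": sum(usage.values())}
    return 200, json.dumps(body)

status, payload = consumption_response("C001", "2024-05")
print(status, payload)
```

Keeping the per-product breakdown alongside the total lets the billing platform reconcile invoices line by line against actual consumption.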
Self-Service Tools for Usage Management to Support Revenue Protection:
The Lead Engineer will develop self-service tools using Streamlit to enable Sales and Implementation teams to monitor client usage, review historical data, and set alerts independently. These tools help teams manage consumption within contractual limits, preventing overuse and supporting FactSet’s revenue goals by empowering users to manage usage proactively.
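The core of such a tool is simple threshold logic; a hedged sketch of it is below (the 80% alert threshold and field names are illustrative, and in a Streamlit app these values would feed widgets such as st.metric and st.warning):

```python
# Hypothetical sketch: per-client usage summary a self-service dashboard
# would render, flagging accounts approaching their contracted quota.

def usage_summary(client, used, quota, alert_pct=80):
    """Return the display row for one client, with an alert flag."""
    pct = round(100 * used / quota, 1)
    return {"client": client, "used": used, "quota": quota,
            "pct_of_quota": pct, "alert": pct >= alert_pct}

print(usage_summary("C001", 45000, 50000))
# → {'client': 'C001', 'used': 45000, 'quota': 50000,
#    'pct_of_quota': 90.0, 'alert': True}
```

Letting Sales and Implementation teams set their own alert_pct per account is what makes the tool self-service rather than another engineering request queue.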
Technology Skill Sets Required:
Data Observability & Monitoring Tools: Experience with DBT Core, Great Expectations, or other OSS observability and data validation frameworks to ensure data quality.
Spark and Delta Lake on Databricks: Proficiency in building and managing data pipelines on Databricks using Spark and Delta Lake, enabling scalable data processing and efficient storage for monitoring and usage tracking.
Data Engineering & ETL: Strong expertise in designing data pipelines, implementing ETL processes, and enforcing data quality standards, specifically on Databricks.
DevOps on Databricks: Experience in deploying and automating Databricks workflows, managing cluster configurations, and optimizing resource usage for efficient data processing.
Python & SQL: Proficiency in Python for scripting, automation, and API development, and SQL for data querying and manipulation.
API Development: Strong skills in developing REST APIs for seamless integration with billing and monitoring systems, aligning consumption data with revenue needs.
UI Development: Experience with Streamlit for building user-friendly, self-service interfaces to facilitate usage tracking and management.
Cloud & DevOps Practices: Proficiency in Docker, AWS, and Heroku for containerization, deployment, and DevOps practices across cloud environments.