Position Overview
Outset is seeking a Principal Software Engineer to join our Cloud Infrastructure & Platform team! As an experienced cloud software engineer on this team, you will lead our technical vision and the transformation of our architecture into a unified application and data platform. You will build and own mission-critical foundation systems that ingest, process, and analyze the health data coming from our Tablo devices as part of a growing network of Internet of Health Things (IoT/IoHT). As a Principal Engineer, you will work at all levels of the architecture and develop microservices that underpin Outset’s application ecosystem, which serves internal customers, external customers, and patients. You will lead efforts for a platform that spans multiple AWS regions and VPCs, is used by a number of software portals, and powers data analytics, machine learning, and business intelligence solutions. This is a great opportunity to develop mission-critical infrastructure and platform services for a fast-growing medical device company.
Candidate attributes and abilities:
You possess a very strong technical background, a deep appreciation for distributed, data-intensive backend systems design, and an uncompromising attitude toward quality and ownership. You want to know everything about our current systems and to find how they might be improved with users and production in mind. You know when to develop a solution with core language features and when to leverage a managed service or open-source tools. You are opinionated about technology because you’ve been there and done it - yet equally collaborative and open-minded. You can evaluate a wide range of technologies, recommend solutions, and lead efforts to implement and deliver.

Our systems are built using a variety of tech stacks, including Core Java, Core Scala, Java (Spring Boot), Scala (Play), Python, Go, TypeScript, Node.js, Docker, various AWS technologies, Kafka, and Snowflake. Some of the systems built and owned end-to-end by our team (a brief, illustrative code sketch follows this list):
- Distributed messaging cluster (Kafka) for streaming IoT and real-time messaging.
- Data lake, warehouse, and databases across S3, Postgres RDS, DynamoDB, and Snowflake.
- Streaming IoT big data, real-time and batch applications, ELT/ETL, and data pipelines.
- Unified platform APIs and services.
- Observability: monitoring and alerting.
- Access and data governance controls and maintenance of PHI as well as non-PHI data.
- Security and access controls across several AWS commercial and GovCloud accounts.
- DevSecOps: provisioning, configuration, securing, and CI/CD.
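To give a flavor of the streaming side of this platform, the sketch below shows what publishing a single device telemetry event to a Kafka topic can look like in Java. It is illustrative only, not Outset’s actual code: the broker address, topic name, device identifier, and payload shape are assumptions made for the example.

```java
// Minimal, hypothetical sketch: publish one device telemetry event to Kafka.
// Broker address, topic name, device ID, and payload shape are placeholders.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class TelemetryPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");            // placeholder broker endpoint
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all");                                     // favor durability for health data

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String deviceId = "device-0001";                          // hypothetical device identifier
            String payload = "{\"deviceId\":\"device-0001\",\"metric\":\"flow_rate\",\"value\":42.0}";
            // Keying by device ID keeps events from one device ordered within a partition.
            producer.send(new ProducerRecord<>("device-telemetry", deviceId, payload));
            producer.flush();
        }
    }
}
```

In the real platform, payloads would more likely be serialized with a schema format such as Avro or Protocol Buffers (both listed in the qualifications below) rather than raw JSON strings.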
Essential Job Functions and Responsibilities
- Help build a scalable, reliable, operable, and performant unified application and data platform for Outset’s application developers, data scientists, data engineers, etc.
- Design new software systems and enhancements to existing systems to support substantial new software features and products.
- Develop SQL and NoSQL solutions; solve big data and complex data problems.
- Develop batch, real-time, and streaming data solutions, as well as data-intensive platform APIs and services (see the sketch after this list).
- Develop performant and robust multi-threaded and event-driven solutions.
- Develop the TabloCloud SDK that IoT devices use for cloud interaction and data transmission.
- Identify limitations and required features in platform APIs and data tools, and partner with peer teams to design and implement them.
- Collaborate with peer teams to help streamline their POCs and MVPs into production-grade systems.
- Deal with technical debt and help refactor our legacy code base, untangling the monolith (Hibernate/ORM), while driving coding standards and quality.
- Help improve our logging to enhance alerting and debugging of production issues, and participate in our on-call support rotation internally and on PagerDuty.
- Help establish and improve measurable metrics for the platform’s success and service objectives.
- Drive efficiency and reliability improvements through design and DevOps automation: performance, scaling, observability, and monitoring.
- Own the cloud technical roadmap and the design and review of end-to-end solutions; ensure design quality and integrity.
- Lead and mentor junior engineers and drive a culture of merit and technical excellence.
- When tackling authentication/authorization and sensitive data problems, be mindful of security, least-privilege access, PII/PHI, and data reliability concerns.
- Design it, build it, ship it, operate it – own it!
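As a rough illustration of the kind of data-intensive platform API referenced above, the following Spring Boot sketch exposes a single read-only device-status endpoint. The endpoint path, DTO, and hard-coded response are hypothetical; a real platform service would enforce authentication/authorization and delegate to a data store rather than return a literal. It assumes spring-boot-starter-web on the classpath and Java 17+.

```java
// Hypothetical sketch of a minimal platform API endpoint (not Outset's actual service).
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DeviceStatusApi {

    // Read-only endpoint returning the latest known status for a device.
    // A production service would look this up in a repository or downstream store.
    @GetMapping("/devices/{deviceId}/status")
    public DeviceStatus status(@PathVariable String deviceId) {
        return new DeviceStatus(deviceId, "ONLINE");
    }

    // Response body; Spring serializes the record to JSON via Jackson.
    public record DeviceStatus(String deviceId, String state) {}

    public static void main(String[] args) {
        SpringApplication.run(DeviceStatusApi.class, args);
    }
}
```

In practice, a service like this might sit behind an ALB inside a VPC and emit logs and metrics to CloudWatch, in line with the observability and security responsibilities listed above.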
Required Qualifications
- Master’s degree in computer science or a similar field, or an equivalent combination of education (bachelor’s degree) and related work experience.
- 15+ years of professional experience in software development, with hands-on coding experience covering full stack and big data.
- Advanced level of English.
- Strong programming skills in one or more of: Java, Python, Scala, TypeScript/JavaScript, Go, and SQL.
- Experience with a cloud platform (we use AWS commercial and GovCloud).
- Strong foundation in pragmatic computer science, with solid competencies in common data structures, algorithms, OOP, functional programming, and software design and patterns. You need to be able to engineer a solution and defend it (we are seeking a strong, results-driven engineer who can deliver, not an academic scientist).
- Experience developing and owning large-scale distributed systems and services.
- Strong problem-solving and debugging skills.
- Experience designing and developing RESTful APIs and data persistence APIs.
- Experience with a variety of backend and database technologies and with making architectural trade-offs.
- Experience with cybersecurity principles and practices, including securing distributed systems, sensitive data handling (e.g., PII/PHI), and infrastructure hardening.
- Desire to be directly responsible for the lifecycle of engineering solutions, including leading the design and implementation of projects and organizing the team to achieve a remarkable solution.
- Willingness to pick up any languages, technologies, or methodologies necessary - and, if a conventional solution does not exist, to roll up the sleeves and innovate as necessary.
- Familiarity with one or more of: DynamoDB, S3, Kafka/ZooKeeper, Kinesis, Postgres, Snowflake, Athena, MQTT, RabbitMQ, GraphQL, Avro, Protocol Buffers, Thrift, gRPC, nginx, AWS VPC, ALB, CloudWatch, CloudTrail, SQS, SNS, Cognito, Inspector, Lambda, Fargate, OWASP, STIG, Spark, Flink, SageMaker, MLflow, TensorFlow, Scikit, etc.
Desired Qualifications:
- Willingness to travel occasionally, based on business needs; having a valid visa to travel to the USA is a plus.
- Experience with HIPAA compliance, data privacy regulations, and medical software development is a plus.
- Experience building and owning highly scalable APIs, data-intensive microservices, domain modeling, and reactive services with ES/CQRS.
- Experience with large-scale distributed storage and database systems (RDBMS or NoSQL).
- Experience with distributed messaging systems.
- Deep understanding of big data architectures and time-series databases, and hands-on experience building pipelines/frameworks using core language features as well as AWS managed services and open-source turn-key solutions.
- Experience building real-time messaging solutions with core language features as well as AWS managed services and open-source turn-key solutions.
- Familiarity with microservice architectures, containers, and related DevOps technologies and concepts.
- Experience setting up server monitoring, alerting, logging, and server provisioning.
- Experience with Terraform, Ansible, Docker, AWS CDK, CloudFormation, EKS/Kubernetes, or similar technologies.
- Experience developing and productionizing ML models.
- Experience with AIOps/MLOps - cloud application and infrastructure predictive analytics: catching platform API, server, and job failures before they happen.
- You get excited when you hear about things like LSM trees, Bloom filters, the Reactive Manifesto, the CAP theorem, write amplification, RocksDB, parachains, etc.