Cisco ThousandEyes is a Digital Experience Assurance platform that empowers organizations to deliver flawless digital experiences across every network – even the ones they don’t own. Powered by AI and an unmatched set of cloud, internet, and enterprise network telemetry data, ThousandEyes enables IT teams to proactively detect, diagnose, and remediate issues – before they impact end-user experiences.
ThousandEyes is deeply integrated across the entire Cisco technology portfolio and beyond, helping customers deploy at scale while also delivering AI-powered assurance insights within Cisco’s leading Networking, Security, Collaboration, and Observability portfolios.
What You’ll Do
You’ll be working in the Endpoint team, whose goal is to ensure our customers have an unparalleled ability to quickly troubleshoot network issues and perform long-term analysis of end-user experience. We equip our customers with complete visibility into end-user connectivity, wherever they may be located and whether they use internal or cloud apps. You’ll help build and scale our backend infrastructure, which ingests, aggregates, and stores all the data our agents collect.
As our footprint grows, you will join a team of operationally-minded engineers and help us tackle data processing and availability at scale.
As part of this role, you will also be responsible for maintaining Endpoint services in a FedRAMP compliant environment and therefore, must be a U.S. Person (i.e. U.S. citizen, U.S. national, lawful permanent resident, asylee, or refugee). This position may also perform work that the U.S. government has specified can only be performed by a U.S. citizen on U.S. soil.
Minimum Qualifications
An experienced Software Engineer with excellent knowledge of computer science fundamentals.
Strong knowledge of JVM languages such as Kotlin or Java.
Experience with developing and maintaining large-scale distributed production systems.
Experience with on-call rotations and production incident resolution.
Preferred Qualifications
Understanding of networking fundamentals.
Familiarity with Docker, Kubernetes, and cloud technologies such as AWS.
Bachelor’s degree in Computer Science or similar.
Experience with streaming architectures such as Kafka, Flink or Spark.
Nice-to-Have Qualifications
Experience working on IoT products.
Experience with processing large data sets on Elasticsearch or similar data stores.
Experience with event-driven architectures (CQRS, Kafka Streams, etc.).