Bloomberg’s Engineering AI department has 350+ AI practitioners building highly sought-after products and features that often require novel innovations. We are investing in AI to build better search, discovery, and workflow solutions using technologies such as transformers, gradient boosted decision trees, large language models, and dense vector databases. We are expanding our group and seeking highly skilled individuals who will contribute to the team (or teams) of Machine Learning (ML) and Software Engineers bringing innovative solutions to AI-driven, customer-facing products.
At Bloomberg, we believe in fostering a transparent and efficient financial marketplace. Our business is built on technology that makes news, research, financial data, and analytics on over 35 million financial instruments searchable, discoverable, and actionable across the global capital markets.
Bloomberg has been building Artificial Intelligence applications that offer solutions to these problems with high accuracy and low latency since 2009. We build AI systems to help process and organize the ever-increasing volume of structured and unstructured information needed to make informed decisions. Our use of AI uncovers signals, helps us produce analytics about financial instruments in all asset classes, and delivers clarity when our clients need it most.
The advent of large language models (LLMs) presents new opportunities to expand the NLP capabilities of our products. As a Senior LLM Platform Engineer in the AI Department, you will have the opportunity to make key technical decisions that help define the future of infrastructure for LLM training and inference at Bloomberg!
Join the AI Department as a Senior LLM Platform Engineer and you will have the opportunity to:
You'll need to have:
- 4+ years of programming experience with an object-oriented programming language
- A degree in Computer Science, Engineering, or a similar field of study, or equivalent work experience
- Experience in designing, developing, and supporting ML applications
- An understanding of ML model architectures, specifically transformers, and their underlying computations
- Experience in profiling, benchmarking, and optimizing ML applications
We’d love to see:
- Deep working proficiency in Python
- Experience using HPC compute platforms and an understanding of the anatomy of distributed computations
- Proficiency with various ML accelerators (NVIDIA GPUs, TPUs, other vendor ASICs) and experience building efficient workloads for them
- A solid understanding of networking (InfiniBand, AWS EFA, RoCE)
- Experience managing infrastructure in Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)
We give back to the technology community and you can read more about our outreach at: http://www.techatbloomberg.com/ai
The referenced salary range is based on the Company's good faith belief at the time of posting. Actual compensation may vary based on factors such as geographic location, work experience, market conditions, education/training and skill level.
We offer one of the most comprehensive and generous benefits plans available and offer a range of total rewards that may include merit increases, incentive compensation (exempt roles only), paid holidays, paid time off, medical, dental, vision, short- and long-term disability benefits, 401(k) with match, life insurance, and various wellness programs, among others. The Company does not provide benefits directly to contingent workers/contractors and interns.