SWE - Machine Learning Engineer, SIML
Apple
Summary

Posted: Oct 21, 2024
Weekly Hours: 40
Role Number: 200509737

We work on the cutting edge of Artificial Intelligence and Machine Learning to build intelligent system experiences for the world's most impactful platforms, such as iOS, macOS, and tvOS. This system-wide intelligence aims to provide best-in-class solutions for problems that are critical to the success of first- and third-party applications on Apple platforms. Examples of such areas include sharing suggestions, vector indexing and search, discovering and indexing people's identities, social relationships, visual recognition of people and things, OCR, natural language generation, and visual generation. We are looking for highly skilled and creative ML practitioners who are well versed in using large language models (LLMs) for a variety of downstream tasks beyond language generation. Of particular interest is using LLMs to reason in a multi-modal setting, combining imperfect visual perception with contextual information derived from the system.

We are the Human and Object Understanding (HOUr) team within the System Intelligence and Machine Learning (SIML) group. We are an applied R&D team that develops fundamental ML technologies and systems for visual perception and reasoning about humans in context. Examples of visual perception technologies the team owns include real-time, always-on object detection (Center Stage, Cinematic Mode), end-to-end system-wide person recognition (Photos, HomeKit, Memoji, Apple Pay), spoof detection (IDs in Wallet), and gaze understanding (Center Stage, intelligent cropping). Examples of high-level reasoning systems include sharing suggestions, inferring name-person relationships, and efficient vector indexing.

Description

As an ML engineer on the SIML HOUr team, you will work with large language models and multi-modal generative models, closely following groundbreaking advancements in this domain, and adapt and apply them to internal use cases. A main mission of the role is building adapters on top of large models to enable specific use cases, with a direct impact on features across the Apple ecosystem. The work involves translating high-level product goals into different levels of the stack: defining data needs, manipulating data, fine-tuning pre-trained models for the task, evaluating them on relevant quality, power, and performance metrics, and prototyping and delivering them for integration. The work is multi-functional, in collaboration with ML researchers, software engineers, product design, and other teams. You will be expected to iterate quickly to deliver high-quality models that are performant, reliable, extensively tested, and documented. Beyond model development, the role also offers experience in scoping projects, estimating timelines, planning across functions, and presenting your work to organization leadership. If this could be of interest, please apply!

Minimum Qualifications

- Hands-on experience with LLM-based workflows, including prompt engineering and parameter-efficient fine-tuning of pre-trained models (see the sketch after this list).
- Proficient in multi-modal settings, specifically integrating vision and language.
- Capable of translating high-level product goals into specific data, model, and metrics requirements.
- Strong communication skills for effective collaboration across multiple teams, with a keen awareness of model complexity, power, and performance.
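To make the adapter-based, parameter-efficient fine-tuning referenced above concrete, here is a minimal sketch of a LoRA-style adapter in plain PyTorch. This is an illustration only, not Apple's internal tooling; the class name, dimensions, and hyperparameters are assumptions for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable
    low-rank update: y = W x + (alpha / r) * B(A(x))."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Hypothetical usage: wrap one projection of a pre-trained block and
# fine-tune only the adapter parameters.
base = nn.Linear(768, 768)  # stands in for a pre-trained weight matrix
adapted = LoRALinear(base, r=8)
optimizer = torch.optim.AdamW(
    [p for p in adapted.parameters() if p.requires_grad], lr=1e-4
)
out = adapted(torch.randn(4, 768))  # (batch, hidden) -> (batch, hidden)
```

Only the two low-rank matrices are trained, which keeps the tunable parameter count a small fraction of the base model's and lets one large pre-trained model serve many downstream use cases.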
Preferred Qualifications

- Master's or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Familiarity with Python, PyTorch, and TensorFlow.
- Hold yourself and others to a high bar when delivering a model.
- Ability to rapidly iterate with fine-tuning toolboxes.