Engineering Manager, Cloud Inference


About the role
We are seeking an experienced Engineering Manager to lead the Cloud Inference Capacity & Operations team for Vertex. You will lead your team to scale and optimize Claude to serve the massive audience of developers and enterprise companies using GCP, and to optimize the consumption of accelerators. Our team ensures our LLMs meet rigorous performance, safety, and security standards, and enhances our core infrastructure for packaging, testing, and deploying inference technology across the globe. Your work will increase the scale at which our services can operate and accelerate our ability to reliably launch new frontier models and innovative features to customers across all platforms.

Responsibilities:
- Set technical strategy and oversee development of high-scale, reliable infrastructure systems
- Collaborate with teams across companies to deeply understand infrastructure, operations, and capacity needs, identifying potential solutions to support frontier LLM serving
- Create clarity for the team and stakeholders in an ambiguous and evolving environment
- Take an inclusive approach to hiring and coaching top technical talent, and support a high-performing team
- Design and run processes (e.g., postmortem reviews, incident response, on-call rotations) that help the team operate effectively and never fail the same way twice

You may be a good fit if you:
- Have 10+ years of experience in high-scale, high-reliability software development, particularly infrastructure or capacity management
- Have 3+ years of engineering management experience
- Have experience recruiting, scaling, and retaining engineering talent in a high-growth environment
- Have experience scaling resources and operations to accommodate rapid growth
- Are deeply interested in the potential transformative effects of advanced AI systems and are committed to ensuring their safe development
- Excel at building strong relationships with stakeholders at all levels and across companies
- Enjoy working in a fast-paced, early environment and are comfortable adapting priorities as driven by the rapidly evolving AI space
- Have excellent written and verbal communication skills and are comfortable with a high degree of collaboration with both internal and external engineers and product managers
- Have demonstrated success building a culture of belonging and engineering excellence
- Are motivated by developing AI responsibly and safely

Strong candidates may also have:
- Experience with machine learning accelerators such as GPUs, TPUs, or Trainium, as well as supporting networking infrastructure like NCCL
- Experience with deployment and capacity management automation
- Expertise in security and privacy best practices

Logistics
The expected salary range for this position is: $200,000 - $250,000 USD

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! We will make every reasonable effort to help with the visa process if we make you an offer.

We encourage you to apply even if you do not meet every single qualification. We value diverse perspectives and believe AI work benefits from broad experience.

How we're different
Anthropic is a public benefit corporation headquartered in San Francisco.
We value impact and offer competitive compensation and benefits, flexible working hours, and a collaborative office environment. We encourage candidates to review our AI usage guidelines during the application process.

Equal Employment Opportunity
Anthropic is an equal opportunity employer. We do not discriminate on the basis of protected status. We encourage applicants from underrepresented groups to apply.
Location: San Francisco, CA, United States
Salary: $200,000 - $250,000
Category: Engineering