
OpenAI loses another lead safety researcher, Lilian Weng

Another one of OpenAI’s lead safety researchers, Lilian Weng, announced on Friday that she is departing the startup. Weng had served as VP of research and safety since August, and before that was the head of OpenAI’s safety systems team.

In a post on X, Weng said that “after 7 years at OpenAI, I feel ready to reset and explore something new.” Weng said her last day will be November 15th, but did not specify where she will go next.

“I made the extremely difficult decision to leave OpenAI,” said Weng in the post. “Looking at what we have achieved, I’m so proud of everyone on the Safety Systems team and I have extremely high confidence that the team will continue thriving.”

Weng’s departure marks the latest in a long string of AI safety researchers, policy researchers, and other executives who have exited the company in the last year, and several have accused OpenAI of prioritizing commercial products over AI safety. Weng joins Ilya Sutskever and Jan Leike – the leaders of OpenAI’s now-dissolved Superalignment team, which tried to develop methods to steer superintelligent AI systems – who also left the startup this year to work on AI safety elsewhere.

Weng first joined OpenAI in 2018, according to her LinkedIn, working on the startup’s robotics team that ended up building a robotic hand that could solve a Rubik’s cube – a task that took two years to achieve, according to her post.

As OpenAI started focusing more on the GPT paradigm, so did Weng. The researcher transitioned to help build the startup’s applied AI research team in 2021. Following the launch of GPT-4, Weng was tasked in 2023 with creating a dedicated team to build safety systems for the startup. Today, OpenAI’s safety systems unit has more than 80 scientists, researchers, and policy experts, according to Weng’s post.

That’s a lot of AI safety folks, but many have raised concerns about OpenAI’s focus on safety as it tries to build increasingly powerful AI systems. Miles Brundage, a longtime policy researcher, left the startup in October and announced that OpenAI was dissolving its AGI readiness team, which he had advised. On the same day, the New York Times profiled a former OpenAI researcher, Suchir Balaji, who said he left OpenAI because he thought the startup’s technology would bring more harm than benefit to society.

OpenAI tells TechCrunch that executives and safety researchers are working on a transition to replace Weng.

“We deeply appreciate Lilian’s contributions to breakthrough safety research and building rigorous technical safeguards,” said an OpenAI spokesperson in an emailed statement. “We’re confident the Safety Systems team will continue playing a key role in ensuring the safety and reliability of our systems, serving hundreds of millions of people globally.”

Other executives who have left OpenAI in recent months include CTO Mira Murati, chief research officer Bob McGrew, and research VP Barret Zoph. In August, the prominent researcher Andrej Karpathy and co-founder John Schulman also announced they’d be leaving the startup. Some of these folks, including Leike and Schulman, left to join an OpenAI competitor, Anthropic, while others have gone on to start their own ventures.
