Notwithstanding reports on online message boards that its most recent large language model (LLM) had been leaked to unauthorised users, Meta Platforms Inc. stated on Tuesday that it will keep releasing its artificial intelligence (AI) tools to authorised researchers.
Meta said in a statement that, while the model is not accessible to everyone and some have attempted to bypass the approval process, it believes its current release method strikes a balance between openness and accountability.
LLaMA, short for Large Language Model Meta AI, was launched last month by Meta, the owner of Facebook. According to Meta, the model can match the human-like conversational abilities of systems built by OpenAI, the company that created ChatGPT, and Alphabet Inc., while using significantly less computing power.
Meta’s AI research division shares the majority of its work publicly, in contrast to certain competitors such as OpenAI, which guards its technology closely and charges software developers for access. Yet AI tools also carry the potential for abuse, such as the spread of misleading information.
To prevent misuse of that nature, Meta offers its tools under a non-commercial licence to researchers and organisations affiliated with government, civil society, and academia, after they pass a screening process.
According to Meta’s statement, the LLaMA release was handled in the same manner as those of earlier models, and the company has no plans to alter its approach.
Meta wants to share cutting-edge AI models with the research community so that researchers can analyse and improve them, a company spokesperson explained.
Meta’s objective goes beyond merely replicating GPT. According to the statement, LLaMA is a “smaller, more performant model” than its competitors: it was designed to accomplish the same feats of comprehension and articulation with a smaller compute footprint, and hence a smaller environmental impact. (It also helps that it costs less to operate.)
Yet the company also made LLaMA “open” in an effort to subtly highlight the fact that, despite its name, “OpenAI” is anything but. Its announcement states:
“Despite all of the recent advances in large language models, full research access to them remains limited because of the resources required to train and run such massive models. This restricted access has hampered researchers’ understanding of how and why these large language models work, slowing efforts to improve their robustness and mitigate known problems such as bias, toxicity, and the potential to spread false information.”