
Built in less than two months, Stanford's ChatGPT-like AI Alpaca has been shut down by its researchers shortly after its debut.

Stanford University artificial intelligence (AI) researchers created Alpaca, a ChatGPT-like demo chatbot, in less than two months, but later took it down, citing “hosting costs and the inadequacies of content filters” in the large language model's (LLM's) behaviour.

According to the Stanford Daily, the takedown announcement came less than a week after the chatbot was made public. Stanford's ChatGPT-like model, which was built for less than $600, has openly accessible source code, and the researchers found that it performed similarly to OpenAI's GPT-3.5.

In their statement, the researchers said that Alpaca is currently intended only for use in academic settings and will not be made available to the general public.

“We think the interesting work is in developing methods on top of Alpaca [since the dataset itself is just a combination of known ideas], so we don’t have current plans along the lines of making more datasets of the same kind or scaling up the model,” said Alpaca researcher Tatsunori Hashimoto of the Computer Science Department.

Alpaca was created using Meta AI's LLaMA 7B model, and its training data was produced using the self-instruct technique. As adjunct lecturer Douwe Kiela put it, “The race was on as soon as the LLaMA model came out.”
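As a rough illustration of the self-instruct idea, the sketch below shows the basic loop: a handful of seed tasks are shown to a stronger “teacher” model, which is prompted to write new instruction/response pairs that are then added to the training pool. The query_teacher_model function is a hypothetical placeholder for whatever model or API a real pipeline would call; this is a minimal sketch, not the Alpaca team's actual code.

```python
# Minimal sketch of a self-instruct round: seed tasks are shown to a teacher
# model, which is asked to produce a new instruction/response pair.
import json
import random

seed_tasks = [
    {"instruction": "List three uses of a paperclip.",
     "output": "Holding papers, resetting electronics, improvised hook."},
    {"instruction": "Translate 'good morning' into French.",
     "output": "Bonjour."},
]

def query_teacher_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a teacher LLM; a real pipeline
    would replace this with an actual API or model call."""
    return json.dumps({
        "instruction": "Summarize a news article in one sentence.",
        "output": "A single-sentence summary of the article.",
    })

def self_instruct_round(tasks, n_new=1):
    """Prompt the teacher with a few sampled tasks and parse a new example."""
    demos = "\n".join(
        f"Instruction: {t['instruction']}\nResponse: {t['output']}"
        for t in random.sample(tasks, k=min(2, len(tasks)))
    )
    prompt = (f"{demos}\n\nWrite {n_new} new instruction and its response "
              f"as JSON with keys 'instruction' and 'output'.")
    new_example = json.loads(query_teacher_model(prompt))
    return tasks + [new_example]

tasks = self_instruct_round(seed_tasks)
print(f"{len(tasks)} examples after one self-instruct round")
```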

“Somebody was going to be the first to instruction-finetune the model, and the Alpaca team was the first,” said Kiela, who previously worked on AI at Facebook. For that reason, the project gained some notoriety. “I thought they executed it really well. It’s a really, really cool, simple idea.”

According to Hashimoto, instruction-finetuning adjusts the LLaMA base model, which is trained to predict the next word from internet data, so that it favours completions that follow instructions over those that do not. Alpaca’s source code is available on the code-sharing site GitHub, where it has received 17,500 views, and more than 2,400 people have used the code to create their own models.
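To make that concrete, here is a minimal sketch of a single instruction-finetuning step using Hugging Face transformers. The model name, prompt template, example, and hyperparameters are illustrative assumptions (a tiny test model stands in for LLaMA 7B); this is not the Alpaca team's actual training code.

```python
# One instruction-finetuning step: format an (instruction, output) pair as a
# single sequence and train with the usual next-token prediction loss.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "sshleifer/tiny-gpt2"  # placeholder; Alpaca used LLaMA 7B
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Illustrative training example and prompt template (assumptions, not Alpaca's).
example = {
    "instruction": "Give three tips for staying healthy.",
    "output": "1. Eat a balanced diet. 2. Exercise regularly. 3. Sleep well.",
}
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{example['instruction']}\n\n### Response:\n"
)
text = prompt + example["output"]

# Next-token prediction over the full sequence nudges the model toward
# completions that follow the instruction.
inputs = tokenizer(text, return_tensors="pt")
labels = inputs["input_ids"].clone()
outputs = model(**inputs, labels=labels)
loss = outputs.loss

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss.backward()
optimizer.step()
print(f"one fine-tuning step, loss = {loss.item():.3f}")
```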

The base language model is still a major bottleneck, according to Hashimoto: “I think much of the observed performance of Alpaca comes from LLaMA.”
As the use of artificial intelligence systems grows by the day, scientists and experts have been debating whether companies should publish their source code, the data and methods they use to train their AI models, and how transparent the technology should be in general.

According to him, one of the safest ways to move the technology forward is to avoid concentrating control of it in the hands of too few people.
