OpenAI Intros New Generative Text Features While Reducing Pricing


OpenAI is releasing a variant of GPT-3.5-turbo with a significantly larger context window

Wednesday, June 14, 2023

OpenAI today announced updated versions of GPT-3.5-turbo and GPT-4, the latter being its most recent text-generating AI, with a capability called function calling. As OpenAI explains in a blog post, function calling lets developers describe programming functions to GPT-3.5-turbo and GPT-4 and have the models return a structured JSON object containing the arguments needed to call those functions, which the developer's own code then executes.

Function calling can be used, for example, to build chatbots that answer requests by invoking external tools, to convert natural language into database queries, and to extract structured data from text. It gives developers a more reliable way to get structured data back from the models, as the sketch below illustrates.
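To make that concrete, here is a minimal sketch of what function calling might look like with the openai Python package (the 2023-era v0.x API); the get_weather function and its schema are hypothetical examples for illustration, not part of OpenAI's announcement:

```python
import json
import openai  # assumes the 2023-era openai Python package (v0.x API)

# A hypothetical function the developer exposes to the model.
functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Paris"},
            },
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",  # the function-calling-enabled release
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call the function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name and JSON-encoded arguments;
    # the developer's own code is responsible for actually executing the call.
    name = message["function_call"]["name"]
    args = json.loads(message["function_call"]["arguments"])
    print(name, args)  # e.g. get_weather {'city': 'Paris'}
```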

In addition to function calling, OpenAI is introducing a variant of GPT-3.5-turbo with a significantly larger context window. The context window is the text the model takes into account before producing any new text, measured in tokens, which are chunks of raw text roughly the size of a word or word fragment. Models with small context windows tend to "forget" the contents of even recent conversations, which causes them to veer off topic, often in problematic ways.
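As a rough illustration of what "measured in tokens" means, the sketch below uses OpenAI's open-source tiktoken tokenizer to count how many tokens a piece of text occupies; the sample text and numbers are illustrative only:

```python
import tiktoken  # OpenAI's open-source tokenizer library

# Get the tokenizer used by GPT-3.5-turbo.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

text = "Models with small context windows tend to forget recent conversation."
tokens = enc.encode(text)

# Each token is roughly a word or word fragment; the prompt plus the generated
# reply must fit inside the model's context window (about 4,000 tokens for the
# original GPT-3.5-turbo, 16,000 for the new variant).
print(len(tokens), "tokens")
```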

The new GPT-3.5-turbo variant offers four times the context length of the original (16,000 tokens versus roughly 4,000) at twice the price: $0.003 per 1,000 input tokens and $0.004 per 1,000 output tokens. OpenAI says 16,000 tokens is enough to process roughly 20 pages of text in a single request, well short of the hundreds of pages that rival AI company Anthropic's flagship model can handle. (OpenAI is also testing a 32,000-token context window for GPT-4, but only in a limited release.)

In better news for developers, OpenAI is also cutting the price of the original GPT-3.5-turbo, the version without the larger context window, by 25%. Developers can now use the model for $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens, which OpenAI says works out to roughly 700 pages per dollar.
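To make the arithmetic concrete, the sketch below estimates the cost of a single request at the prices quoted above (the 16k variant is named gpt-3.5-turbo-16k in OpenAI's announcement; the token counts are made-up example values):

```python
# Prices per 1,000 tokens (USD), as quoted in OpenAI's announcement.
PRICES = {
    "gpt-3.5-turbo":     {"input": 0.0015, "output": 0.002},
    "gpt-3.5-turbo-16k": {"input": 0.003,  "output": 0.004},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD of one request."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Example: a 2,000-token prompt that produces a 500-token reply.
print(estimate_cost("gpt-3.5-turbo", 2000, 500))      # $0.004
print(estimate_cost("gpt-3.5-turbo-16k", 2000, 500))  # $0.008
```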


The cost of text-embedding-ada-002, one of OpenAI's more popular text-embedding models, is also coming down. Text embeddings are numerical representations that measure the relatedness of text strings; they are commonly used for search (where results are ranked by relevance to a query string) and recommendations (where items with related text are suggested).
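As a hedged sketch of how that relatedness is typically measured, the example below embeds two strings with text-embedding-ada-002 (via the 2023-era openai Python package) and compares them with cosine similarity; the helper functions and sample strings are purely illustrative:

```python
import numpy as np
import openai  # assumes the 2023-era openai Python package (v0.x API)

def embed(text: str) -> np.ndarray:
    """Fetch the embedding vector for a string from text-embedding-ada-002."""
    response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(response["data"][0]["embedding"])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Higher values mean the two texts are more closely related."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embed("How do I reset my password?")
doc = embed("Steps for recovering access to your account")
print(cosine_similarity(query, doc))  # related texts score noticeably higher than unrelated ones
```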

The price of text-embedding-ada-002 has been reduced by 75%, to $0.0001 per 1,000 tokens. OpenAI says the decrease was made possible by greater efficiency in its systems, undoubtedly an important area of focus for a company that spends hundreds of millions of dollars on infrastructure and R&D.

Since the release of GPT-4 in early March, OpenAI has signaled that incremental upgrades to existing models, rather than enormous new from-scratch models, are its preferred approach. At a recent Economic Times conference, CEO Sam Altman reiterated that OpenAI has not started training GPT-4's successor and that the company "has a lot of work to do" before it begins that model.

[Source: Techcrunch.com]



Suraj Verma
