DESCRIPTION:
OpenAI has announced GPT-4, the most recent release of its large language model, which the company says performs at a “human level” on a variety of professional exams. The new model is larger than its predecessors, with more weights in its model file, and it is more expensive to run. GPT-4 was built by “scaling up”, the approach that many experts in the field credit for the most recent advances in AI. Microsoft, which has invested billions in the firm, provided the Microsoft Azure infrastructure used to train GPT-4. OpenAI declined to share specifics about the model’s size or the hardware used to build it, citing “the competitive landscape”, but the model is reported to have been trained on an Azure supercomputer built from thousands of GPUs, at a cost that could run into the tens of millions of dollars.
OPENAI: CHATGPT’S REACH IS GETTING SO MUCH WIDER!
In the upcoming weeks, GPT-4 is expected to power a number of AI demonstrations; according to Microsoft, the Bing AI chatbot is already using it. OpenAI says the new model gives fewer factually inaccurate responses and strays from the subject less frequently. On several exams it also outperforms most people, scoring in the 90th percentile on a simulated bar exam, the 93rd percentile on an SAT reading test, and the 89th percentile on an SAT math test.
GPT-3.5 VS. GPT-4
While the differences between GPT-3.5 and GPT-4 might not be immediately apparent in casual conversation, GPT-4’s advantage becomes clear as the conversation progresses, according to OpenAI. As more complicated AI tasks arise, the company expects GPT-4 to outperform its predecessor in both reliability and creativity. OpenAI has also released test data to back up this claim, showing that GPT-4 beats its predecessor in practically every area. Here is a list of the GPT-4 and GPT-3.5 test results:
| Simulated exams | GPT-4 | GPT-4 (no vision) | GPT-3.5 |
|---|---|---|---|
| Uniform Bar Exam (MBE+MEE+MPT) | 298 / 400 (~90th) | 298 / 400 (~90th) | 213 / 400 (~10th) |
| LSAT | 163 (~88th) | 161 (~83rd) | 149 (~40th) |
| SAT Evidence-Based Reading & Writing | 710 / 800 (~93rd) | 710 / 800 (~93rd) | 670 / 800 (~87th) |
| SAT Math | 700 / 800 (~89th) | 690 / 800 (~89th) | 590 / 800 (~70th) |
| Graduate Record Examination (GRE) Quantitative | 163 / 170 (~80th) | 157 / 170 (~62nd) | 147 / 170 (~25th) |
| Graduate Record Examination (GRE) Verbal | 169 / 170 (~99th) | 165 / 170 (~96th) | 154 / 170 (~63rd) |
| Graduate Record Examination (GRE) Writing | 4 / 6 (~54th) | 4 / 6 (~54th) | 4 / 6 (~54th) |
| USABO Semifinal Exam 2020 | 87 / 150 (99th–100th) | 87 / 150 (99th–100th) | 43 / 150 (31st–33rd) |
| USNCO Local Section Exam 2022 | 36 / 60 | 38 / 60 | 24 / 60 |
| Medical Knowledge Self-Assessment Program | 75% | 75% | 53% |
| Codeforces Rating | 392 (below 5th) | 392 (below 5th) | 260 (below 5th) |
| AP Art History | 5 (86th–100th) | 5 (86th–100th) | 5 (86th–100th) |
| AP Biology | 5 (85th–100th) | 5 (85th–100th) | 4 (62nd–85th) |
| AP Calculus BC | 4 (43rd–59th) | 4 (43rd–59th) | 1 (0th–7th) |
GPT-4 is not flawless, the firm cautions, and in many situations it performs worse than humans. The model is not always factually accurate and still exhibits “hallucination”, or making up facts, and it is prone to insisting it is correct even when it is wrong. According to OpenAI, GPT-4 still has drawbacks, including social biases, hallucinations, and susceptibility to adversarial prompts, which the company is striving to fix.
Paid ChatGPT users will have access to the new model, and developers may incorporate it into their applications through an API. OpenAI will charge roughly 3 cents for every 750 words of prompts and about 6 cents for every 750 words of responses.
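For developers, a minimal sketch of what that API integration and pricing might look like in practice is shown below. It assumes the openai Python package as it existed around the GPT-4 launch (the `openai.ChatCompletion.create` interface; newer versions of the library use a different client object), and it treats the per-750-word prices above as roughly $0.03 per 1,000 prompt tokens and $0.06 per 1,000 completion tokens, since 750 words is approximately 1,000 tokens. The prompt text and the cost constants are illustrative, not official.

```python
# Sketch: call GPT-4 through the OpenAI API and estimate the cost of one request
# from the token counts returned in the response. Pricing constants reflect the
# launch pricing quoted above; check current pricing before relying on them.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; in practice, load this from an environment variable

PROMPT_PRICE_PER_1K = 0.03       # USD per 1,000 prompt tokens (~750 words)
COMPLETION_PRICE_PER_1K = 0.06   # USD per 1,000 completion tokens (~750 words)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the key differences between GPT-3.5 and GPT-4."},
    ],
)

# The API reports how many tokens the prompt and the completion used.
usage = response["usage"]
cost = (usage["prompt_tokens"] / 1000) * PROMPT_PRICE_PER_1K \
     + (usage["completion_tokens"] / 1000) * COMPLETION_PRICE_PER_1K

print(response["choices"][0]["message"]["content"])
print(f"Estimated cost for this call: ${cost:.4f}")
```

Running this once with a short prompt would cost only a fraction of a cent; the per-token billing mainly matters for applications that send long documents or generate lengthy responses.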
In general, the introduction of GPT-4 marks a substantial advancement in the fields of AI and natural language processing. Despite its drawbacks, the model’s ability to perform at or above human level on tests suggests it could become an important tool for a variety of applications, including chatbots, search engines, and more. OpenAI is continually improving and refining its technology, and in the years to come we can expect even more remarkable advances in AI.