Comparative Analysis of GPT-4 and BERT: Evaluating the Performance and Efficiency of Two Prominent Language Models

EasyChair Preprint 14798

6 pages · Date: September 11, 2024

Abstract

This research compares and contrasts GPT-4 and BERT, two prominent large language models in natural language processing (NLP). OpenAI's GPT-4 was designed primarily for text generation, while Google's BERT focuses on language understanding. The models are evaluated on their architecture, training datasets, performance across several NLP tasks, and computational cost. Both were tested on a standard dataset on tasks such as text classification, sentiment analysis, and question answering. The results highlight the strengths and weaknesses of each model, as well as the NLP scenarios to which each is best suited.

Keyphrases: GPT-4, Model Architecture, Sentiment Analysis, large language models, training datasets
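The evaluation protocol the abstract describes, scoring each model's predictions against the same gold labels on a shared benchmark so results are directly comparable, can be sketched as follows. The labels and predictions below are illustrative placeholders, not the paper's data, and the function names are my own.

```python
from typing import Dict, List


def accuracy(predictions: List[str], gold: List[str]) -> float:
    """Fraction of predictions that match the gold labels."""
    if len(predictions) != len(gold):
        raise ValueError("prediction and gold label counts must match")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)


def compare_models(outputs: Dict[str, List[str]], gold: List[str]) -> Dict[str, float]:
    """Score every model on the same benchmark, keyed by model name."""
    return {name: accuracy(preds, gold) for name, preds in outputs.items()}


# Placeholder sentiment-analysis labels (not drawn from the paper).
gold_labels = ["pos", "neg", "pos", "neg"]
model_outputs = {
    "gpt4": ["pos", "neg", "pos", "pos"],
    "bert": ["pos", "neg", "neg", "neg"],
}
scores = compare_models(model_outputs, gold_labels)
print(scores)  # {'gpt4': 0.75, 'bert': 0.75}
```

The same harness extends to the other tasks mentioned (text classification, question answering) by swapping in the appropriate gold labels and, for QA, a looser match criterion than exact string equality.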

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:14798,
  author    = {Himmat Rathore},
  title     = {Comparative Analysis of GPT-4 and BERT: Evaluating the Performance and Efficiency of Two Prominent Language Models},
  howpublished = {EasyChair Preprint 14798},
  year      = {EasyChair, 2024}}