Decoding the Magic of ChatGPT: A Technical Overview 🎩🤖

Atul Yadav
3 min read · Sep 8, 2023


[Header image generated with Midjourney]

1. Unveiling the Mystery: What Powers ChatGPT? 🧠

ChatGPT’s human-like responses aren’t magic but a result of advanced computational processes. It interprets your prompts and crafts answers, which often feel intuitive and human. So, how does it comprehend and produce text?

A. The Transformer Architecture: 🏛️

This architecture processes vast amounts of unstructured data, particularly text. Its stacked layers of self-attention and feed-forward operations work on every token in a sequence in parallel, which is what enabled the recent breakthroughs in text generation.
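
To make those "parallel layers of mathematical equations" concrete, here is a minimal sketch of scaled dot-product self-attention, the core Transformer operation, written in plain NumPy. The shapes and values are toy illustrations, not those of a real model:

import numpy as np

def self_attention(Q, K, V):
    # Every token compares itself against every other token in parallel
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # pairwise similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax: each row sums to 1
    return weights @ V                                  # weighted mix of value vectors

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))                        # 4 toy tokens, 8 features each
print(self_attention(tokens, tokens, tokens).shape)     # (4, 8): one updated vector per token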

2. The Evolution of GPT: From GPT-1 to GPT-4 🌱🌳

Though the GPT series is built on a consistent architecture, each successor incorporates more parameters and richer training data. Later versions also refine the training methodology itself, most notably with reinforcement learning from human feedback (RLHF).

3. Understanding Data Transformation: Vectors, Matrices, & Tensors 🔢

Vectors, matrices, and tensors are containers of numbers with increasing dimensionality: a vector is a one-dimensional list, a matrix is a two-dimensional grid, and a tensor generalises to any number of dimensions. The model pushes them through a series of mathematical operations to arrive at the best possible outcomes.
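
A quick illustration of the three shapes in NumPy (the numbers here are arbitrary):

import numpy as np

vector = np.array([0.5, -1.2, 3.0])      # 1-D: shape (3,)
matrix = np.zeros((4, 3))                # 2-D: shape (4, 3)
tensor = np.zeros((2, 4, 3))             # 3-D: e.g. (batch, tokens, features)
print(vector.shape, matrix.shape, tensor.shape)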

A. Input & Output Values: 💬

Your prompt acts as the input, but there’s more to it. Let’s demystify how the model comprehends human language:

  • Generative Pre-trained Transformers (GPT): 🔄 We already discussed the Transformer. But what about the rest? Where does the model obtain its numerical data?
  1. Tokens: Text is broken into small chunks, and each chunk is assigned a unique identifier. These IDs serve as numerical representations of language units.
  2. Tokenisation: The splitting itself can happen at various granularities: sentences, words, sub-words, or characters.

B. Representing Tokens: 🏷️

GPT-2, for instance, works with a vocabulary of 50,257 tokens. How are these units represented after tokenisation? The example below, which appears to have been produced with a BERT-style tokenizer (hence the [CLS] and [SEP] markers), shows a sentence converted into token labels and IDs:

Sentence: "students celebrate the graduation with a big party"
Token labels: ['[CLS]', 'students', 'celebrate', 'the', 'graduation', 'with', 'a', 'big', 'party', '[SEP]']
Token IDs: tensor([[ 101, 2493, 8439, 1996, 7665, 2007, 1037, 2502, 2283, 102]])
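
For reference, output like the above can be reproduced with the Hugging Face transformers library and a BERT-style tokenizer (assumed installed; GPT-2's own BPE tokenizer would yield different tokens and IDs):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
sentence = "students celebrate the graduation with a big party"
encoded = tokenizer(sentence, return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"][0].tolist()))  # ['[CLS]', 'students', ..., '[SEP]']
print(encoded["input_ids"])                                               # tensor of token IDs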

4. Encoding and the Issue of Cardinality: 📜

Each token carries a unique ID, but those IDs are arbitrary labels, not quantities. How do we stop the model from treating a token with a larger number as somehow more important than one with a smaller number?

A. One-hot Encoding: 🌡️

A binary vector system: each token becomes a vector the length of the vocabulary, with exactly one non-zero element. This removes any false ordering implied by the raw IDs, but the resulting vectors are enormous, sparse, and say nothing about meaning.
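
A sketch of what one-hot encoding looks like, using the vocabulary size and token ID quoted earlier purely for illustration:

import numpy as np

vocab_size = 50257             # vocabulary size mentioned above
token_id = 2283                # the ID shown for "party" in the earlier example
one_hot = np.zeros(vocab_size)
one_hot[token_id] = 1          # a single 1 among tens of thousands of zeros
print(one_hot.shape)           # (50257,): huge, sparse, and carries no notion of meaning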

B. Embeddings: 📌

A more efficient approach. Tokens pass through an embedding layer and are mapped to dense, continuous vectors of a fixed size. Because these vectors are learned, tokens used in similar contexts end up close together, which is exactly what the similarity measure in the next section exploits.

Token Label: “party”
Token ID: 2283
Embedding Vector Length: 768
Embedding Tensor Shape: ([1, 10, 768])
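
In PyTorch terms, this is roughly what an embedding layer does. This is a minimal sketch; the vocabulary size, embedding dimension, and token IDs are taken from the examples above, not from ChatGPT's actual weights:

import torch

embedding_layer = torch.nn.Embedding(num_embeddings=50257, embedding_dim=768)
token_ids = torch.tensor([[101, 2493, 8439, 1996, 7665, 2007, 1037, 2502, 2283, 102]])
embeddings = embedding_layer(token_ids)
print(embeddings.shape)   # torch.Size([1, 10, 768]): (batch, tokens, embedding dimension)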

5. Measuring Semantic Similarity: 📏

How do we measure how close two language units are in meaning? A common choice is the cosine similarity between their embedding vectors:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
# Example embeddings
embedding_cat = np.array([0.5, 0.3, -0.1, 0.9])
embedding_dog = np.array([0.6, 0.4, -0.2, 0.8])
# Calculate cosine similarity
similarity = cosine_similarity([embedding_cat], [embedding_dog])[0][0]
print(f"Cosine Similarity between 'cat' and 'dog': {similarity:.4f}")

This calculation shows that the vectors for “cat” and “dog” are close; repeating it for an unrelated pair such as “car” and “banana” would yield a much lower score.

6. Contextual Understanding of Sentences: 📖

Take the word “party” in “we celebrated with a big party” versus “the political party won the election”. The same token receives a different contextual representation in each sentence, which is what lets the model tell a celebration from a political organisation; the sketch below shows one way to measure that difference.
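
A rough sketch of this idea with a BERT-style model from the transformers library (ChatGPT's own weights are not public, so this illustrates the principle rather than its exact mechanics):

import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.metrics.pairwise import cosine_similarity

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = [
    "we celebrated with a big party",          # "party" = celebration
    "the political party won the election",    # "party" = organisation
]

party_vectors = []
for sentence in sentences:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]                    # (tokens, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    party_vectors.append(hidden[tokens.index("party")].numpy())          # contextual vector for "party"

print(cosine_similarity([party_vectors[0]], [party_vectors[1]])[0][0])   # noticeably below 1.0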

The exploration into ChatGPT’s underlying tech unveils a fascinating interplay of math and language, leading to the production of near-human textual outputs. Stay tuned for the upcoming blog posts diving even deeper into this AI marvel! 🚀🌌


Written by Atul Yadav

MLOps | DataOps | DevOps Practitioner
