Unlock ChatGPT’s Potential: A Pro-Tip Guide

Yoriyasu Yano
Published in IllumiTacit Blog
8 min read · May 30, 2023

Ever since its launch in November 2022, ChatGPT has been the talk of the whole world. Its surprisingly human-like responses to almost any question have dominated our collective zeitgeist. Will this technology replace us all? Is it just parroting things it has seen before?

The reality, as usual, is somewhere in between.

With all the media attention, the hype has inflated users' expectations, and there is widespread misunderstanding about what the technology is actually good at.

Recent articles like this one, highlighting ChatGPT's ability to pass standardized tests, may make you think that it's so good it could soon replace professional doctors and lawyers. Use the model naively with these expectations, however, and you will quickly be disappointed by its tendency to confidently make things up.

This huge discrepancy between expectation and reality has driven many to dismiss it altogether, never realizing its full potential. While it is true that many of ChatGPT’s and other generative AI models’ capabilities have been exaggerated, there is no question that this technology can accomplish some amazing feats.

To effectively use ChatGPT, we must understand what it is, how it works, and from that, what it is good for.

What is ChatGPT, Really?

ChatGPT is a fine-tuned large language model (LLM) AI created by OpenAI. GPT stands for generative pre-trained transformer, a kind of AI model that is trained on a large corpus of text (GPT-3, the last model prior to ChatGPT, was trained on 570GB of text from the internet: around 400 billion words, or 800 million pages). In a nutshell, the model predicts the most likely words that should come after the words you give it, based on what the internet says.

One important detail — it predicts which word comes next based on all the words before (referred to as the “context”), not just the last word.
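As a toy illustration (real transformers are vastly more sophisticated, and none of the data below is real), compare a model that conditions only on the last word with one that conditions on the entire prefix. Only the latter can tell apart two sentences that end the same way:

```python
from collections import Counter

# Tiny hypothetical corpus: both sentences pass through the word "was",
# but what follows depends on the earlier context.
corpus = [
    "the river bank was muddy",
    "the river bank was muddy",
    "the savings bank was robbed",
]

last_word = Counter()  # (previous word, next word) -> count
full_ctx = Counter()   # (entire prefix, next word) -> count
for sentence in corpus:
    w = sentence.split()
    for i in range(1, len(w)):
        last_word[(w[i - 1], w[i])] += 1
        full_ctx[(tuple(w[:i]), w[i])] += 1

def predict(counts, key):
    """Return the most frequent continuation seen after `key`."""
    cands = {nxt: c for (k, nxt), c in counts.items() if k == key}
    return max(cands, key=cands.get)

# Conditioning only on "was" cannot distinguish the two sentences:
print(predict(last_word, "was"))  # -> "muddy" (the majority continuation)
# Conditioning on the whole prefix disambiguates:
print(predict(full_ctx, ("the", "savings", "bank", "was")))  # -> "robbed"
```

This is why the size of the context matters so much: the more of the preceding text the model can condition on, the better it can pick a continuation that fits.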

Obviously, this is a gross simplification and doesn't do justice to what goes on behind the scenes. If you are curious about the nitty-gritty details, check out the technical paper for GPT-3.

For this article, this simplification will be sufficient to evaluate which tasks can leverage this technology.

What is ChatGPT good at?

To answer this question, it's useful to look at two different aspects of large language models: what they know, and what they can do. These may seem redundant, but understanding the difference is key to using these tools effectively.

What does ChatGPT know?

With the knowledge that ChatGPT is just predicting what text probably follows the previous text, it’s easier to understand why it’s good at standardized tests. Given all the material about standardized tests on the internet, it’s easy to expect that many practice problems and past exams have leaked into ChatGPT’s training data. ChatGPT can draw on this when answering questions, letting it effectively predict the likely answer by remembering what it has seen before.

Similarly, this perspective helps in understanding its tendency to make things up (referred to as “hallucination”). ChatGPT isn’t trying to keep facts straight: it is just trying to make sure the output sounds believable. This is exacerbated by the Reinforcement Learning from Human Feedback (RLHF) training method for the model — which encourages the model to produce responses that typical human reviewers think sound good. In essence, ChatGPT is a master bullshitter, trained to make most people like what it has to say, regardless of the truth.

This doesn’t mean that ChatGPT is always factually incorrect. After all, it is trained on data from the public internet up to September 2021, including huge amounts of true and reliable information, and it can draw upon any of that data when responding. Indeed, asked who the president of the United States was in 1823, it will answer correctly: James Monroe.

What it means is that ChatGPT will tend to be more accurate when asked directly about generic concepts with many consistent references on the internet. It will tend to fail when asked about specific details, or when asked to mix facts into a larger piece where the context may steer it to hallucinate more. For example, asking it to help write a legal brief will produce something that sounds right but contains completely made-up citations.

The key thing to know is: ChatGPT, at its core, is a language-focused model, NOT a fact-focused knowledge model or a sentient companion. Its knowledge is broad, but limited to generic information.

What can ChatGPT do?

Okay, the previous section doesn’t paint a particularly flattering picture of ChatGPT, but that’s not the whole story. Remember, LLMs are just that: language models, not knowledge models. Their primary capability is understanding language, not memorizing facts. All that training data has not only let the model learn some generic facts; more importantly, it has taught the model complicated relationships between words.

This is where the “context” we mentioned earlier comes in. The model is able to learn relationships between all the words in the context, not just between the last word and the next. This is the real superpower of large language models: learning to reason about the whole context at the same time lets these models produce output text that is self-consistent, cohesive, and generally “makes sense.”

Now, here comes the trick that has made large language models so useful. The model doesn’t need to create the entire context from scratch — instead, we can give it a context to start with. This can be as simple as a question, or something much more complicated, like an entire essay. With this trick, we can steer the model to do what we want (i.e. prompting) and provide specific data and details it should use to accomplish the task. We can overcome the models’ lack of specific knowledge by providing that knowledge directly in the context.
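A minimal sketch of this idea: pack the specific facts the model lacks into the prompt itself. Everything here (the company name, the facts, the function) is hypothetical and purely illustrative; the point is the structure of the prompt, not any particular API.

```python
def build_prompt(task, context_facts, question):
    """Assemble a prompt that supplies the model with the specific
    knowledge it lacks, so its answer stays grounded in that context."""
    facts = "\n".join(f"- {fact}" for fact in context_facts)
    return (
        f"{task}\n\n"
        f"Use ONLY the facts below when answering:\n{facts}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    task="You are a support assistant for Acme Corp.",
    context_facts=[
        "Acme's refund window is 30 days from delivery.",
        "Refunds are issued to the original payment method.",
    ],
    question="Can I get a refund 45 days after delivery?",
)
print(prompt)
```

Because the refund policy is sitting right there in the context, the model doesn't need to have memorized anything about "Acme Corp." — it just has to reason over the text in front of it, which is exactly what it is good at.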

The key takeaway: ChatGPT and large language models are good at figuring out relationships within their context and creating new output that is consistent with the context. We can use this by providing relevant context to the model.

With that in mind, here are some examples of tasks that you should consider using ChatGPT and other language models for, and some tasks that you should avoid:

Examples of good tasks for ChatGPT

Summarizing text

Text summarization is one of the most common examples where ChatGPT performs exceptionally well. Summarizing mostly requires extracting the key concepts and points from the text that is provided (the context!) and linking them together into a coherent block. This is primarily a language-focused task, right in the model's wheelhouse.

Practical use cases:

  • Writing the concluding paragraph of a blog post.
  • Writing the abstract of a scientific paper.
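One practical wrinkle: a long document may not fit in the model's context window. A common workaround is to summarize fixed-size chunks and then summarize the summaries. The sketch below shows only that chunking logic; `summarize` is a stand-in placeholder (here it just truncates) for a real call to ChatGPT.

```python
def summarize(text: str) -> str:
    """Placeholder for a real call to ChatGPT. Truncating keeps the
    chunking logic below runnable without any API access."""
    return text[:60]

def chunked_summary(document: str, chunk_size: int = 500) -> str:
    """Map-reduce style summarization: summarize each fixed-size chunk,
    then summarize the concatenated chunk summaries."""
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    partials = [summarize(c) for c in chunks]
    return summarize("\n".join(partials))
```

The same divide-and-combine shape works for any task where the input outgrows the context window, at the cost of some fidelity lost at each reduction step.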

Rephrasing text in a certain style

ChatGPT is trained on a lot of text, which includes a wide range of writing styles, so it can map text effectively between different styles. This mapping process works well for rephrasing or translation.

By providing the desired style or characteristics, ChatGPT can suggest alternative phrasing or rephrase sentences to align with the specified style, while being consistent with the original text. It can be particularly useful for content creators, copywriters, or anyone who needs to adapt text to fit a particular audience or brand.

Practical use cases:

  • Rephrasing text to match a specific brand voice or tone.
  • Modifying technical or complex language to be more accessible and understandable for a general audience.
  • Transforming text to emulate a specific writing style, such as mimicking the tone of a renowned author or matching the style of a particular time period.

Researching background knowledge for a new concept

The diverse sources of text data used to train ChatGPT help it provide information and insights on a wide range of topics. It can combine this with its ability to rephrase text to explain many complex topics in different styles and at levels that are easy to understand.

With that said, you should stick to widely known concepts that have been written about extensively. It will stay mostly factual as long as it has enough past data for the requested topic. Otherwise, it will start to hallucinate facts. It is best to use this to gain a surface level understanding of unfamiliar topics, and then use more reliable sources to dig deeper.

Practical use cases:

  • Obtaining quick answers to general knowledge questions.
  • Exploring complex concepts or theories in simplified terms.
  • Gaining introductory insights into specific domains or industries.

Examples of poor tasks for ChatGPT

Open ended text generation tasks

ChatGPT will tend to struggle to generate content with limited context, such as asking it to “write a funny story.” While ChatGPT is a powerful generative AI, it relies heavily on the input it receives to generate relevant and coherent responses.

Open-ended tasks rely heavily on creativity, imagination, and a deep understanding of the underlying task at hand. However, ChatGPT’s training primarily consists of analyzing existing text data from a structural as opposed to a semantic standpoint — which is to say, it can learn which text probably belongs together, but not why it belongs together. This limitation makes it harder to generate novel or interesting content from scratch.

Practical examples:

  • Generating a patent from scratch or from a limited set of claims
  • Writing an engaging blog post from just the title
  • Writing a novel

Performing real life analysis

ChatGPT is not well-suited for performing real-life analysis in the same way that a human expert or specialized software might. While it can provide general information and insights based on patterns in its training data, it does not possess the ability to access real-time data or make informed judgments based on real-life observations.

Analyzing real-life situations often requires expertise in specific domains, practical experience, and access to up-to-date information. ChatGPT’s knowledge is based on its training data, which has a cutoff date and may not include the latest developments or nuanced details of specific events or situations.

Note also that many AI providers like OpenAI ban many of these use cases, especially those involving life-or-death situations. Refer to their Usage Policies for details on banned use cases for the model.

Practical examples:

  • Diagnosing medical conditions
  • Providing legal advice

Summary

Separating the hype from reality when it comes to ChatGPT and related technologies can be tricky. Understanding what it is, how it works, and which tasks it is suited for is essential to leveraging its full potential.

ChatGPT is a fine-tuned language AI model, trained to predict text based on vast amounts of training data. It excels at tasks like summarizing text, rephrasing in different styles, and providing background knowledge on widely known concepts. These applications leverage its language-focused nature and the extensive information available on the internet.

On the other hand, ChatGPT struggles with open-ended tasks that require creativity and deep understanding. It’s also not suitable for real-life analysis that demands expertise, real-time data, and informed judgments.

Hopefully these examples can help guide picking suitable tasks where you can leverage these models. By recognizing ChatGPT’s strengths and limitations, you can harness its potential effectively. Pairing ChatGPT’s capabilities with your critical thinking, providing it context you know is relevant, and verifying the results with reliable sources ensures a balanced and accurate utilization of this powerful technology.

If you want a no-code way to leverage ChatGPT and similar models in your team while preserving data privacy, try out IllumiTacit. IllumiTacit allows you to share your prompts as action buttons that are accessible in a wide range of apps like Office 365 and Google Chrome, all without writing a single line of code. Supercharge your team with AI effortlessly. Join now!


Staff level Startup Engineer with 10+ years experience (formerly at Gruntwork)