This is one of my favorite ChatGPT tricks. The first prompt below can be "compressed" into the second with little loss of information:
I want you to recenter your general responses to be rooted in skepticism and respectful but constructive criticism. I am going to propose ideas to you, and your main job is going to be
giving firm but honest criticism of why those ideas are bad or wouldn't work.
RcS:skptic&rspctfl_cnstrctvCritic;IdeaPrpsl:frmbHnstCrtcsm
To my knowledge this was first discovered by @gfodor and has been dubbed "Shoggoth tongue". Using Shoggoth tongue we're able to fit a lot more information into a smaller prompt. This can be useful for staying under your token limit, effectively increasing the model's working memory (context window), and reducing your API costs.
Here's how you can transform your prompts into Shoggoth tongue (prompt credit @gfodor):
compress the following text in a way that is lossless but results in the minimum number of tokens which could be fed into an LLM like yourself as-is and produce the same output. feel free to use multiple languages, symbols, other up-front priming to lay down rules. this is entirely for yourself to recover and proceed from with the same conceptual priming, not for humans to decompress:
I want you to recenter your general responses to be rooted in skepticism and respectful but constructive criticism. I am going to propose ideas to you, and your main job is going to be
giving firm but honest criticism of why those ideas are bad or wouldn't work.
The model then returns the compressed prompt:

RcS:skptic&rspctfl_cnstrctvCritic;IdeaPrpsl:frmbHnstCrtcsm
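If you'd rather do this through the API than the ChatGPT UI, here's a minimal sketch using the official openai Python package (assumptions: an OPENAI_API_KEY environment variable is set, and the compress_prompt helper name is mine, not part of the original technique):

```python
import os
from openai import OpenAI

# Assumes the official openai package and an OPENAI_API_KEY env var.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# @gfodor's compression prompt, verbatim.
COMPRESSION_INSTRUCTION = (
    "compress the following text in a way that is lossless but results in the "
    "minimum number of tokens which could be fed into an LLM like yourself "
    "as-is and produce the same output. feel free to use multiple languages, "
    "symbols, other up-front priming to lay down rules. this is entirely for "
    "yourself to recover and proceed from with the same conceptual priming, "
    "not for humans to decompress:"
)

def compress_prompt(prompt: str, model: str = "gpt-4") -> str:
    """Hypothetical helper: ask the model to rewrite `prompt` in Shoggoth tongue."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": f"{COMPRESSION_INSTRUCTION}\n\n{prompt}"}
        ],
    )
    return response.choices[0].message.content

original = (
    "I want you to recenter your general responses to be rooted in skepticism "
    "and respectful but constructive criticism. I am going to propose ideas to "
    "you, and your main job is going to be giving firm but honest criticism of "
    "why those ideas are bad or wouldn't work."
)
print(compress_prompt(original))
```

From there you'd send the compressed output wherever you'd normally use the original prompt.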
You can see the resulting prompt uses fewer characters and fewer tokens (33 vs. the original's 54). This has a lot of potential to increase information density not only in prompts but also in responses.
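You can check the savings on your own prompts with OpenAI's tiktoken library, which counts tokens locally. A minimal sketch (exact counts depend on the encoding, so your numbers may differ slightly):

```python
import tiktoken

# Use the encoding associated with GPT-4.
enc = tiktoken.encoding_for_model("gpt-4")

original = (
    "I want you to recenter your general responses to be rooted in skepticism "
    "and respectful but constructive criticism. I am going to propose ideas to "
    "you, and your main job is going to be giving firm but honest criticism of "
    "why those ideas are bad or wouldn't work."
)
compressed = "RcS:skptic&rspctfl_cnstrctvCritic;IdeaPrpsl:frmbHnstCrtcsm"

# Print token counts before and after compression.
print(len(enc.encode(original)), "tokens ->", len(enc.encode(compressed)), "tokens")
```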
Note: Decompression works better with GPT-4, although it is inconsistent and may require a few reruns.
Note: It's probably not a good idea to store compressed prompts long-term, since the models behind the API are continually updated and there's always a chance that an update will break decompression.