Researchers from Carnegie Mellon University have developed a new approach to natural language processing (NLP) tasks, called prompting, which uses pre-trained language models (LMs). The approach concatenates additional text with the input to guide the LM toward producing the desired output. The research formulates prompt optimization as a policy optimization problem: searching for the prompts that best enhance the LM's performance across tasks.
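To make the idea concrete, below is a minimal sketch (not the authors' implementation) of the core prompting step: a candidate prompt is concatenated with the task input, and the pre-trained LM's preference for the desired output is used as a score. A policy-optimization method would search over such prompts to maximize this kind of reward; the snippet only shows the scoring of a few hand-written candidate prompts, using a Hugging Face GPT-2 model and a hypothetical sentiment example.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def prompt_score(prompt: str, text: str, target: str) -> float:
    """Score a prompt by the LM's log-likelihood of the desired target
    tokens when the prompt is concatenated with the task input."""
    context_ids = tokenizer.encode(f"{prompt} {text}", return_tensors="pt")
    target_ids = tokenizer.encode(" " + target, return_tensors="pt")
    input_ids = torch.cat([context_ids, target_ids], dim=1)

    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)

    # Sum the log-probabilities of the target tokens; the logit at
    # position j predicts the token at position j + 1.
    offset = context_ids.shape[1]
    total = 0.0
    for i in range(target_ids.shape[1]):
        token_id = target_ids[0, i]
        total += log_probs[0, offset + i - 1, token_id].item()
    return total

# Hypothetical candidate prompts and label word; a policy-optimization
# search would propose and refine prompts like these automatically.
candidates = ["Review sentiment:", "Overall the movie was", "In summary, it was"]
text = "The film was a delight from start to finish."
for prompt in candidates:
    print(f"{prompt_score(prompt, text, 'great'):8.3f}  {prompt!r}")
```

In this toy setup, the prompt that makes the LM assign the highest likelihood to the correct label word would receive the highest reward, which is the quantity a prompt-optimization policy would be trained to maximize.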
