Isabel Cachola, Kyle Lo, Arman Cohan, and Daniel Weld have created a program that reads a scientific paper and outputs a single sentence summarizing its content. The software uses neural networks trained on many examples and is designed to help researchers sift through the huge volume of published papers faster than reading abstracts. They also introduce SCITLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers, and propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal.
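To give a sense of how titles can serve as an auxiliary training signal, here is a minimal, hypothetical sketch of the CATTS-style data setup: title-generation and TLDR-generation examples are mixed into one training set, with a control code marking which target the model should produce. The function name, field names, and the exact control-code strings are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of multitask data mixing in the spirit of CATTS:
# one seq2seq model is trained to produce either a title or a TLDR,
# with a control code appended to the source to select the task.
import random

def build_catts_training_set(papers, seed=0):
    """Each paper is a dict with 'body', 'title', and an optional 'tldr'."""
    examples = []
    for paper in papers:
        # Auxiliary task: generate the title from the paper body.
        examples.append((paper["body"] + " <|TITLE|>", paper["title"]))
        # Main task: generate the TLDR, when one is annotated.
        if paper.get("tldr"):
            examples.append((paper["body"] + " <|TLDR|>", paper["tldr"]))
    # Shuffle so the two tasks are interleaved during training.
    random.Random(seed).shuffle(examples)
    return examples

papers = [
    {"body": "We study extreme summarization of scientific papers ...",
     "title": "TLDR: Extreme Summarization of Scientific Documents",
     "tldr": "A new dataset and method for one-sentence paper summaries."},
    {"body": "A paper without a TLDR annotation ...",
     "title": "Some Auxiliary Paper"},
]
dataset = build_catts_training_set(papers)
print(len(dataset))  # two title targets plus one TLDR target
```

The point of the sketch is that every paper has a title, so the auxiliary task supplies extra supervision even for papers with no TLDR annotation.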
