An independent group of researchers known as LAION has released a paper detailing how they improved on OpenAI’s CLIP system, a model that pairs an image encoder with a text encoder so that images and text can be compared in a shared embedding space. By training on a larger dataset and applying newer ML techniques, LAION produced an improved version of the system, which it calls OpenCLIP. Experiments comparing the two models have shown promising results for OpenCLIP.
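
In practice, this kind of cross-modal comparison amounts to embedding an image and a set of candidate captions into the shared space and ranking the captions by similarity. The sketch below illustrates that workflow using the open-source open_clip library; the specific architecture name, pretrained checkpoint tag, and file path are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch of cross-modal comparison with OpenCLIP.
# Assumptions: the open_clip package is installed, and the "ViT-B-32" /
# "laion2b_s34b_b79k" checkpoint name is available; "photo.jpg" is a placeholder.
import torch
from PIL import Image
import open_clip

# Load a LAION-pretrained CLIP-style model plus its image preprocessing pipeline
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # batch of one image
text = tokenizer(["a diagram", "a dog", "a cat"])          # candidate captions

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

    # Normalize so dot products become cosine similarities
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    # Softmax over similarities gives a probability for each caption
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # highest value indicates the best-matching caption
```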
