Marqo has released two new multimodal embedding models, Marqo-FashionCLIP and Marqo-FashionSigLIP, for fashion search and recommendation. The models embed product images and text into a shared space, enabling more accurate and personalized retrieval and recommendations. The team fine-tuned the models with a multi-part loss, improving search results for shorter descriptive text and keyword-like queries. The models were evaluated on seven publicly available fashion datasets, showing promising results across downstream tasks.
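
As an illustration of how the shared image-text embedding space can drive search, the sketch below scores a few candidate text queries against a product image. It assumes the checkpoint is published on the Hugging Face Hub as "Marqo/marqo-fashionCLIP" and is loadable through the open_clip library; the model ID and the local file name "dress.jpg" are assumptions for this example, not details confirmed in the announcement.

```python
# Minimal sketch: text-to-image similarity with a fashion CLIP-style model.
# Assumes the weights are on the Hugging Face Hub as "Marqo/marqo-fashionCLIP"
# and can be loaded via open_clip; adjust the model ID if it differs.
import open_clip
import torch
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:Marqo/marqo-fashionCLIP"
)
tokenizer = open_clip.get_tokenizer("hf-hub:Marqo/marqo-fashionCLIP")
model.eval()

# Encode one catalogue image and a few candidate queries into the shared space.
image = preprocess(Image.open("dress.jpg")).unsqueeze(0)  # hypothetical local image
queries = ["red evening dress", "denim jacket", "running shoes"]
text = tokenizer(queries)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize so the dot product equals cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    similarity = (image_features @ text_features.T).squeeze(0)

# The highest-scoring query is the closest textual match to the image.
best = similarity.argmax().item()
print(f"Best match: {queries[best]} (score {similarity[best]:.3f})")
```

In a real search pipeline the same encoders would be used to pre-compute embeddings for an entire catalogue, with queries matched against them via nearest-neighbour lookup rather than a single pairwise comparison.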