Content moderation
If your company is built on user-generated content, you know that a single harmful image, video, or podcast uploaded to your site can be a disaster. Reviewing risky or dangerous content manually is impossible at the scale needed to keep all of your users safe. This application uses a CLIP model to classify the temperament of cat images as a proxy for potentially problematic content: if an image is labeled "dangerous," the model recommends pulling it from the platform.
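The approach above can be sketched with zero-shot CLIP classification: score an image against a small set of text prompts and flag it when the "dangerous" prompt wins. This is a minimal illustration assuming the Hugging Face `transformers` CLIP API; the prompt wording, checkpoint name, and `THRESHOLD` value are illustrative, not the application's actual configuration.

```python
# Hypothetical sketch of CLIP-based moderation (not the app's actual code).
from typing import List, Tuple

# Illustrative zero-shot prompts; index 1 is the "problematic" class.
CANDIDATE_LABELS = ["a photo of a calm cat", "a photo of a dangerous cat"]
DANGEROUS_INDEX = 1
THRESHOLD = 0.5  # assumed cutoff for recommending removal

def moderate(label_probs: List[float]) -> Tuple[str, bool]:
    """Return the winning label and whether to pull the image."""
    best = max(range(len(label_probs)), key=label_probs.__getitem__)
    pull = label_probs[DANGEROUS_INDEX] >= THRESHOLD
    return CANDIDATE_LABELS[best], pull

def classify(image_path: str) -> Tuple[str, bool]:
    """Zero-shot score an image with CLIP, then apply the moderation rule.

    Requires `pip install transformers torch pillow` and downloads the
    openai/clip-vit-base-patch32 checkpoint on first use.
    """
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    inputs = processor(
        text=CANDIDATE_LABELS,
        images=Image.open(image_path),
        return_tensors="pt",
        padding=True,
    )
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, num_labels)
    probs = logits.softmax(dim=-1)[0].tolist()
    return moderate(probs)
```

The decision rule is kept separate from model inference so the pull/keep logic can be tuned (or unit-tested) without loading CLIP weights.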
Use cases
Content Moderation
Resources
PostgreSQL
CLIP
Explore more

Speech to text
Use OpenAI Whisper for incredibly accurate speech-to-text in dozens of languages.

Image generation
Use Stable Diffusion to generate original images from a text prompt.

Deploy XGBoost model
Serve an XGBoost model behind a REST API endpoint.

Deploy Keras model
Serve a Keras model behind a REST API endpoint.