Huggingface Stars
AI research agent that reads papers, runs experiments, and tells you what worked!
Chain apps and models to build robust AI workflows 🤗
Glance: Accelerating Diffusion Models with 1 Sample
An additional, non-node-based UI for ComfyUI focused on inference, with stable UI states, presets, and an advanced queue. Based on Gradio.
A lightweight inference framework for image and video generation.
A lightweight, local-first, and 🆓 experiment tracking library from Hugging Face 🤗
Generate a comprehensive review from an arXiv paper, then turn it into a blog post. This project powers the website for Hugging Face's Daily Papers (https://huggingface.co/papers).
Building Blocks for Multi-Modal Gradio Apps Powered by Groq
All credits go to Hugging Face's Daily AI Papers (https://huggingface.co/papers) and the research community. 🔉 Audio summaries here (https://t.me/daily_ai_papers).
HuggingFace Paper Explorer: an extended time range for top AI research papers.
A zero-dependency, platform-independent, and lightweight Gradio client.
Benchmarking LLMs with Challenging Tasks from Real Users
[SIGGRAPH ASIA 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data
InstantID: Zero-shot Identity-Preserving Generation in Seconds 🔥
Official repo for VideoComposer: Compositional Video Synthesis with Motion Controllability
[NeurIPS 2022] The official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning", applied to SAM.
Official implementation of "Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation"
Implementation of "SVDiff: Compact Parameter Space for Diffusion Fine-Tuning"
VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
[ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators
Video-P2P: Video Editing with Cross-attention Control