We are offering a research grant program for individuals or teams engaged in groundbreaking work in Generative AI, Computer Vision, and Machine Learning.
This program includes a focused research period of 3 to 6 months, with opportunities for collaboration and several possible outcomes:
Publication in high-impact journals and conferences.
Publishing open-source code and models on public platforms like GitHub and Hugging Face.
Developing a white paper filled with insights and future research directions.
TL;DR:
💰 Funding: $2,500 - $10,000 as a no-equity grant.
📽 Cloud Credits: Up to $10,000 in VideoDB cloud credits.
🤝 Team Support: Regular mentorship with VideoDB experts to accelerate your project.
Infrastructure Support: VideoDB will provide robust infrastructure support to facilitate your research, especially in areas requiring supervised sample creation for model training, ingesting and analyzing terabytes of video data, video streaming, and more.
Funding: Each project can receive $2,500 to $10,000 in cloud credits and an additional $2,500 to $10,000 in cash to cover various research needs.
Mentorship: We offer weekly mentorship sessions to help address challenges and monitor progress, ensuring you have the guidance needed to succeed.
⭐️ Topics of Interest:
If you are working on any of the following topics, we would love to collaborate with you.
💻 Open Source Models:
If you are training or fine-tuning models for tasks like text ➡️ video, image ➡️ video, or video ➡️ video and need infrastructure support for managing terabytes of video, audio, and image files, we can help. We bring our expertise to support your research by:
Setting up a searchable database for processed samples.
Managing custom and manual annotations.
Enabling real-time video streams to and from models.
Providing infrastructure support for your video generation pipeline.
Model Training: We offer assistance with training models on large volumes of video data. If your research involves creating samples from terabytes of video and requires robust data cleaning and management, we're here to help.
Results Evaluation: We support the evaluation of your results using VideoDB’s database and multimodal search capabilities.
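To make the "searchable database for processed samples" idea above concrete, here is a minimal sketch of how a per-clip embedding index with cosine-similarity search could look. It is purely illustrative: `embed_clip`, the 512-dimensional embedding, and the file paths are hypothetical placeholders, not part of the VideoDB API.

```python
# Minimal sketch: a searchable index of processed video samples.
# embed_clip is a hypothetical placeholder for your real embedding model.
import numpy as np

def embed_clip(clip_path: str) -> np.ndarray:
    """Placeholder: return a unit-norm embedding for a clip."""
    rng = np.random.default_rng(abs(hash(clip_path)) % (2**32))
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

class SampleIndex:
    def __init__(self):
        self.paths: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, clip_path: str) -> None:
        self.paths.append(clip_path)
        self.vectors.append(embed_clip(clip_path))

    def search(self, query_vec: np.ndarray, k: int = 5) -> list[tuple[str, float]]:
        mat = np.stack(self.vectors)          # (n, d) matrix of stored embeddings
        scores = mat @ query_vec              # cosine similarity (all vectors unit-norm)
        top = np.argsort(-scores)[:k]
        return [(self.paths[i], float(scores[i])) for i in top]

index = SampleIndex()
for path in ["clips/train_000.mp4", "clips/train_001.mp4"]:
    index.add(path)
print(index.search(embed_clip("clips/query.mp4"), k=2))
```

In practice the in-memory list would be replaced by a proper vector store, but the ingest-embed-search loop stays the same shape.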
📊 Benchmarking:
We want to build comprehensive benchmarks across all types of vision models, evaluating accuracy, speed, and efficiency under various conditions, and we seek to push the boundaries further. You are encouraged to bring forward proposals that challenge existing ideas or introduce new metrics for model evaluation.
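As a rough sketch of what such a benchmark harness could measure, the snippet below records accuracy alongside mean and p95 latency. The `model.predict` interface and the `(image, label)` dataset format are assumptions for illustration, not a prescribed setup.

```python
# Sketch of a benchmark harness for vision models: accuracy plus latency stats.
# The model interface and dataset format are hypothetical.
import time
from statistics import mean, quantiles

def benchmark(model, dataset):
    """dataset: iterable of (image, label); model exposes predict(image)."""
    correct, latencies = 0, []
    for image, label in dataset:
        start = time.perf_counter()
        prediction = model.predict(image)
        latencies.append(time.perf_counter() - start)
        correct += int(prediction == label)
    return {
        "accuracy": correct / len(latencies),
        "mean_latency_s": mean(latencies),
        "p95_latency_s": quantiles(latencies, n=20)[-1],  # 95th percentile cut point
    }
```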
🎥 Video Understanding:
Scene Detection: Utilize computer vision or LLM models to identify scenes within videos, creating coherent boundaries (clips) of varying lengths (windows) based on the audio-visual content (see the sketch after this list).
Temporal Dimension: Conduct research on video encoders and the temporal understanding of video data for tasks such as activity detection and causality detection.
Vertical Cuts: Develop methods to structure video information on a spatial plane, enhancing the creation of engaging vertical cuts.
Sports Indexing: Create domain-specific indexing techniques tailored to sports such as soccer, cricket, or tennis.
RAG (Retrieval-Augmented Generation): Develop video indexing techniques that replicate the human brain's ability to recall and connect information across different contexts and times, alongside advanced methods for result ranking.
Personalized Ranking: Create algorithms that understand user preferences and deliver personalized search results.
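For the scene-detection item above, here is a minimal, hedged sketch of the classic baseline: detecting shot cuts by comparing colour histograms of consecutive frames with OpenCV. The similarity threshold and histogram bins are rough assumptions that would need tuning per dataset; LLM- or audio-aware approaches would build on boundaries like these.

```python
# Sketch: naive scene-boundary (shot-cut) detection via histogram differences.
import cv2

def detect_cuts(video_path: str, threshold: float = 0.6) -> list[int]:
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:   # low similarity -> likely a cut
                cuts.append(frame_idx)
        prev_hist, frame_idx = hist, frame_idx + 1
    cap.release()
    return cuts

print(detect_cuts("example.mp4"))
```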
🔦 AI Detection:
Implement video fingerprinting techniques to recognize alterations in subsequent videos (a minimal hashing sketch follows this list).
Design algorithms to identify and flag "impossible" sequences, aiding in deepfake detection or identifying manipulated footage.
Explore methods to detect and correct biases in AI models.
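For the fingerprinting item above, one simple starting point is a perceptual average hash per sampled frame, with Hamming distance flagging frames that changed between an original and a suspect copy. The sampling step, the 8x8 hash size, and the distance threshold are assumptions, and the sketch assumes the two videos are frame-aligned.

```python
# Sketch: perceptual frame fingerprints (average hash) for alteration detection.
import cv2
import numpy as np

def frame_hash(frame: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (8, 8), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()          # 64-bit boolean hash

def fingerprint(video_path: str, step: int = 30) -> list[np.ndarray]:
    cap, hashes, idx = cv2.VideoCapture(video_path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:                           # sample every `step` frames
            hashes.append(frame_hash(frame))
        idx += 1
    cap.release()
    return hashes

def altered_frames(orig, suspect, max_bits: int = 10) -> list[int]:
    """Indices of sampled frames whose Hamming distance exceeds max_bits.
    Assumes both videos are frame-aligned."""
    return [i for i, (a, b) in enumerate(zip(orig, suspect))
            if int((a != b).sum()) > max_bits]
```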
➗ Mathematics & Linear Algebra:
Constraint Reasoning in Vector Spaces: Utilize constraint reasoning within vector spaces to reduce dimensions and resolve constraints within a defined hyperplane (see the numeric sketch after this list).
Dynamic Embedding Updates: Develop methods to update embedding vector spaces to incorporate recent information, such as political changes or other evolving contexts.
Advanced Video Compression: Integrate the latest advancements in vision encoders and embeddings into the development of new video compression techniques and codecs.
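As a small worked example for the constraint-reasoning item above, the sketch below resolves a single linear constraint by orthogonally projecting an embedding onto the hyperplane {x : n · x = b}. The vectors and constraint are illustrative numbers only, not tied to any particular model's embedding space.

```python
# Worked sketch: orthogonal projection onto the hyperplane {x : n . x = b}.
import numpy as np

def project_to_hyperplane(x: np.ndarray, n: np.ndarray, b: float) -> np.ndarray:
    """Return the closest point to x that satisfies n . x = b."""
    return x - ((n @ x - b) / (n @ n)) * n

x = np.array([1.0, 2.0, 3.0])
n = np.array([0.0, 0.0, 1.0])      # constrain the third coordinate
projected = project_to_hyperplane(x, n, b=0.5)
print(projected)                   # [1.  2.  0.5] -> the constraint now holds
```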
🤖 Multi-Agent Systems:
Collaborative Multi-Agent Systems: Develop frameworks that enable effective collaboration within multi-agent systems, fostering cooperation and communication between agents.
AI Ethics Simulations: Run simulations to explore and probe the ethical dimensions of AI, helping to identify and address potential moral dilemmas and biases in AI behavior.
Reflective "Self-Talk" for LLMs: Build systems that enable large language models (LLMs) to engage in reflective "self-talk," allowing them to reason through problems, refine their responses, and enhance decision-making processes.
💡 Code Generation:
Automated Code Generation from Scholarly Articles: Develop a system that automatically converts scholarly articles from public research platforms into executable code, enabling researchers to quickly implement and test the ideas presented in the papers.
Autonomous Code Writing and Refinement: Create a perpetual program that autonomously writes and refines its own code using reinforcement learning, continuously improving its functionality and efficiency over time.
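One possible shape for the autonomous write-and-refine loop is sketched below: a hypothetical `propose_code` LLM call drafts a function, a unit test provides the feedback signal, and failures are fed back for another attempt. The canned model output and the missing sandbox are simplifications for illustration.

```python
# Sketch of a write-test-refine loop for autonomous code generation.
import traceback

def propose_code(task: str, feedback: str = "") -> str:
    """Hypothetical stand-in for an LLM call that returns Python source."""
    return "def add(a, b):\n    return a + b\n"       # canned example output

def refine(task: str, test, max_rounds: int = 3) -> str:
    feedback = ""
    for _ in range(max_rounds):
        source = propose_code(task, feedback)
        namespace: dict = {}
        try:
            exec(source, namespace)                    # caution: sandbox in practice
            test(namespace)
            return source                              # test passed
        except Exception:
            feedback = traceback.format_exc()          # feed the error back
    raise RuntimeError("no passing solution found")

def test_add(ns):
    assert ns["add"](2, 3) == 5

print(refine("write add(a, b)", test_add))
```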
How to Apply?
This list is not exhaustive, and we are always open to bold and exciting ideas. Feel free to drop us a line with your proposal.