Everlyn AI Research
The Everlyn team has published 400+ AI research papers.
Over 500,000 citations.
Everlyn's research is concentrated in the areas of autoregressive video generation, tokenization, MLLMs, masking, and multi-module video generation.
Efficient autoregressive video generation with one multi-modal model through unstructured tokenization & masking
Towards the optimal solution to vector quantization: A distributional perspective
VideoGen-of-Thought: A collaborative multi-module video generation framework for long, reasona
Intervening anchor token is all you need in alleviating hallucinations for MLLMs
Seeing clearly, answering incorrectly: A multi-modal robustness benchmark for MLLMs on leading questions
VMAMBAR: Hybrid architecture for autoregressive text-to-image generation
DOCVID-1M: A large-scale documentary video dataset with multi-modal captioning
Towards a generalized video editing framework for visual consistency and quality
On disentangling the visual effect of multi-modal large language models
Everworld is coming
Enter your email to join the whitelist for the first virtual world of video agents.
Be the first to join Everworld.