In the era of digital transformation, the Metaverse fuses virtual reality (VR), augmented reality (AR), and web technologies into immersive digital experiences. However, its evolution is slowed by the challenges of content creation, scalability, and dynamic user interaction. Our study investigates the integration of Mixture of Experts (MoE) models with Generative Artificial Intelligence (GAI) for mobile edge computing to revolutionize content creation and interaction in the Metaverse. Specifically, we harness an MoE model's ability to manage complex tasks efficiently by dynamically selecting the most relevant experts, each running a specialized sub-model, to enhance the capabilities of GAI. We then present a novel framework that improves the quality and consistency of generated video content, and demonstrate its application through case studies. Our findings underscore the efficacy of integrating MoE and GAI to redefine virtual experiences, offering a scalable and efficient pathway to realizing the Metaverse's full potential.
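To make the expert-selection idea concrete, the following is a minimal, illustrative sketch of a sparsely gated MoE layer in PyTorch. It is not the framework from the paper: the class name `SparseMoE`, the use of plain `nn.Linear` experts, and the per-sample routing loop are simplifying assumptions chosen for readability; a gating network scores the experts and only the top-k are run for each input.

```python
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    """Minimal top-k gated mixture-of-experts layer (illustrative sketch)."""
    def __init__(self, dim: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        # Each "expert" is a simple linear sub-model for illustration.
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)  # gating network scores experts
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.gate(x)                       # (batch, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # keep the k most relevant experts
        weights = weights.softmax(dim=-1)           # normalize weights over the chosen k
        out = torch.zeros_like(x)
        # Route each sample through its selected experts and mix the outputs.
        for b in range(x.size(0)):
            for slot in range(self.k):
                e = idx[b, slot].item()
                out[b] += weights[b, slot] * self.experts[e](x[b])
        return out
```

In production MoE systems the routing is batched and load-balanced rather than looped per sample, but the gating principle is the same.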
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Load the pretrained text-to-video diffusion pipeline in half precision.
pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe = pipe.to("cuda")

prompt = "<----------->"  # replace with your text prompt
video_frames = pipe(prompt).frames[0]

# Export the generated frames to a video file and print its path.
video_path = export_to_video(video_frames)
print(video_path)
# Option 1: install VBench from PyPI
pip install vbench

# Option 2: install VBench from source
git clone https://github.com/Vchitect/VBench.git
pip install -r VBench/requirements.txt
pip install VBench
# Evaluate your generated videos on a chosen VBench dimension
python evaluate.py \
    --dimension $DIMENSION \
    --videos_path /path/to/folder_or_video/ \
    --mode=custom_input
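As a usage sketch, the single-dimension call above can be looped over several dimensions. The dimension names below (e.g. `subject_consistency`, `motion_smoothness`, `imaging_quality`) are assumed from the VBench documentation and should be checked against the version you install:

```shell
# Evaluate the generated videos on several VBench dimensions in turn.
for DIMENSION in subject_consistency motion_smoothness imaging_quality; do
    python evaluate.py \
        --dimension "$DIMENSION" \
        --videos_path /path/to/folder_or_video/ \
        --mode=custom_input
done
```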
The graph summarizes representative models and tasks, each supported by relevant research papers. Detailed information about each model and task, together with the corresponding references, is listed below.
Task: Data Augmentation - Object Detection. Ref: Data Augmentation for Intelligent Mechanical Fault Diagnosis Based on Local Shared Multiple-Generator GAN
Task: Content Generation - Image. Ref: MCL-GAN: Generative Adversarial Networks with Multiple Specialized Discriminators
Task: Model Acceleration. Ref: DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale
Task: Content Generation - Image. Ref: STABLEMOE: Stable Routing Strategy for Mixture of Experts
Task: Model Acceleration; Content Generation - Image. Ref: M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design
Task: Image Classification. Ref: Scaling Vision with Sparse Mixture of Experts
If our work aids your research, please cite our paper:
@misc{liu2024fusion,
      title={Fusion of Mixture of Experts and Generative Artificial Intelligence in Mobile Edge Metaverse},
      author={Guangyuan Liu and Hongyang Du and Dusit Niyato and Jiawen Kang and Zehui Xiong and Abbas Jamalipour and Shiwen Mao and Dong In Kim},
      year={2024},
      eprint={2404.03321},
      archivePrefix={arXiv}
}