Intelligent Acceleration Lab
Zhikai Li
Latest
MGRQ: Post-Training Quantization For Vision Transformer With Mixed Granularity Reconstruction
RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers
Patch Similarity Aware Data-Free Quantization for Vision Transformers
Patch-Wise Mixed-Precision Quantization of Vision Transformer
DCIFPN: Deformable cross-scale interaction feature pyramid network for object detection
Region Probability Map-Guided Fast Wide-Area Multiobject Detection
Mechanical Particle Filter Based Active Vision System for Fast Wide-Area Multi-Object Detection
Sparsity Induction for Accurate Post-Training Pruning of Large Language Models
SAQ-SAM: Semantically-Aligned Quantization for Segment Anything Model
DilateQuant: Accurate and Efficient Diffusion Quantization via Weight Dilation
LLM Inference Unveiled: Survey and Roofline Model Insights
QFT: Quantized Full-Parameter Tuning of LLMs with Affordable Resources
PSAQ-ViT V2: Towards Accurate and General Data-Free Quantization for Vision Transformers
BinaryViT: Towards Efficient and Accurate Binary Vision Transformers
Rethinking Prediction Alignment in One-stage Object Detection
Dual-Discriminator Adversarial Framework for Data-Free Quantization
Hardware-Oriented Algorithm for High-Speed Laser Centerline Extraction Based on Hessian Matrix
K-Sort Eval: Efficient Preference Evaluation for Visual Generation via Corrected VLM-as-a-Judge
PTQ4ARVG: Post-Training Quantization for AutoRegressive Visual Generation Models
Efficient-SAM2: Accelerating SAM2 with Object-Aware Visual Encoding and Memory Retrieval
CacheQuant: Comprehensively Accelerated Diffusion Models
K-Sort Arena: Efficient and Reliable Benchmarking for Generative Models via K-wise Human Preferences
RepQuant: Towards Accurate Post-Training Quantization of Large Transformer Models via Scale Reparameterization
EDA-DM: Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models
I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference