REN: Fast and Efficient Region Encodings from Patch-Based Image Encoders

Savya Khosla, Sethuraman T V, Barnett Lee, Alexander Schwing, Derek Hoiem
University of Illinois Urbana-Champaign
{savyak2, st34, bl29, aschwing, dhoiem}@illinois.edu

Representing images with REN's region-based tokens (instead of conventional patch-based ones) improves performance across multiple tasks while yielding compact, content-aware representations.


Performance vs. Token Count

Performance vs. token count: REN matches the performance of the original image encoder (DINOv2), which uses 1369 tokens/image, with just 41 tokens/image, and surpasses it beyond that point. Performance stabilizes at 70 tokens/image. NTA denotes no token aggregation, i.e., 1369 tokens/image.

Abstract

We introduce the Region Encoder Network (REN), a fast and effective model for generating region-based image representations using point prompts. Recent methods combine class-agnostic segmenters (e.g., SAM) with patch-based image encoders (e.g., DINO) to produce compact and effective region representations, but they suffer from high computational cost due to the segmentation step. REN bypasses this bottleneck using a lightweight module that directly generates region tokens, enabling 60x faster token generation with 35x less memory, while also improving token quality. It uses a few cross-attention blocks that take point prompts as queries and features from a patch-based image encoder as keys and values to produce region tokens that correspond to the prompted objects. We train REN with three popular encoders—DINO, DINOv2, and OpenCLIP—and show that it can be extended to other encoders without dedicated training. We evaluate REN on semantic segmentation and retrieval tasks, where it consistently outperforms the original encoders in both performance and compactness, and matches or exceeds SAM-based region methods while being significantly faster. Notably, REN achieves state-of-the-art results on the challenging Ego4D VQ2D benchmark and outperforms proprietary LMMs on Visual Haystacks' single-needle challenge.
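
For intuition, the core mechanism can be sketched in a few lines of PyTorch: point prompts are embedded as query tokens, and a small stack of cross-attention blocks lets them pull features from the frozen encoder's patch tokens. Class names, the point-embedding scheme, and the block count below are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

class RegionCrossAttentionBlock(nn.Module):
    # One prompt-to-patch cross-attention block (illustrative sketch).
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.LayerNorm(dim), nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, queries, patch_feats):
        # Point-prompt queries attend to patch features (keys and values).
        kv = self.norm_kv(patch_feats)
        attn_out, _ = self.attn(self.norm_q(queries), kv, kv)
        queries = queries + attn_out
        return queries + self.mlp(queries)

class RegionEncoder(nn.Module):
    # Maps point prompts plus frozen patch features to region tokens.
    def __init__(self, dim, num_blocks=3):
        super().__init__()
        self.point_embed = nn.Linear(2, dim)  # (x, y) in [0, 1] -> query token (assumed embedding)
        self.blocks = nn.ModuleList([RegionCrossAttentionBlock(dim) for _ in range(num_blocks)])

    def forward(self, points, patch_feats):
        # points: (B, P, 2) normalized prompt coordinates
        # patch_feats: (B, N, dim) tokens from a frozen encoder such as DINOv2
        tokens = self.point_embed(points)
        for blk in self.blocks:
            tokens = blk(tokens, patch_feats)
        return tokens  # (B, P, dim), one region token per prompt

# Example: 5 point prompts over a 37 x 37 = 1369-token DINOv2 feature map
feats = torch.randn(1, 1369, 768)
points = torch.rand(1, 5, 2)
region_tokens = RegionEncoder(dim=768)(points, feats)  # torch.Size([1, 5, 768])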

Method Overview

Point prompts interact with patch-based features through cross-attention blocks to produce region tokens. The training objective combines two components: (1) a contrastive loss that aligns region tokens with those generated from an augmented view of the same image, and (2) a feature similarity loss that aligns a linear projection of these tokens with average-pooled patch features obtained using SAM masks. REN eliminates the need for explicit segmentation at inference time while producing efficient and semantically rich region representations. We also visualize thresholded attention maps for three query points inside the cross-attention block; these maps indicate that the model learns to aggregate features primarily from the regions marked by the corresponding point prompts.
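
The two training terms above can be written down compactly. The sketch below assumes one SAM mask per prompt, a symmetric InfoNCE-style contrastive term, and equal weighting of the two losses; the paper's exact temperature, weighting, and pooling details may differ.

import torch
import torch.nn.functional as F

def ren_losses(tokens_v1, tokens_v2, proj, patch_feats, sam_masks, temperature=0.07):
    # tokens_v1, tokens_v2: (P, D) region tokens for the same P prompts in two augmented views
    # proj:                 linear layer mapping region tokens into the patch-feature space
    # patch_feats:          (N, C) patch features from the frozen encoder
    # sam_masks:            (P, N) binary masks over the N patches, one SAM mask per prompt
    # (Temperature and loss weighting are assumptions, not the paper's values.)

    # (1) Contrastive loss: the i-th token in view 1 should match the i-th token in view 2.
    z1 = F.normalize(tokens_v1, dim=-1)
    z2 = F.normalize(tokens_v2, dim=-1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    contrastive = 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

    # (2) Feature-similarity loss: projected tokens should match SAM-mask average-pooled patch features.
    weights = sam_masks.float()
    weights = weights / weights.sum(dim=1, keepdim=True).clamp(min=1)
    pooled = weights @ patch_feats  # (P, C) mean patch feature inside each prompt's mask
    feat_sim = 1.0 - F.cosine_similarity(proj(tokens_v1), pooled, dim=-1).mean()

    return contrastive + feat_sim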

REN Overview

Tasks & Results


Visual Query Localization

REN effectively localizes visual queries in long videos despite challenges like clutter, occlusions, background blending, motion blur, viewpoint changes, and brief visibility. On the Ego4D VQ2D benchmark, REN outperforms all existing approaches, including those specifically developed for this benchmark.


Qualitative examples for five visual queries.

Method       stAP   tAP    Success   Recovery
SiamRCNN     0.13   0.21   41.6      34.0
CocoFormer   0.18   0.26   48.1      43.2
VQLoC        0.24   0.32   55.9      45.1
HERO-VQL     0.28   0.37   60.7      45.3
PRVQL        0.28   0.37   59.4      45.7
RELOCATE     0.35   0.43   60.1      50.6
REN          0.40   0.52   61.2      49.3

Semantic Segmentation

REN improves semantic segmentation performance across different image encoders. Its region tokens produce cleaner, less noisy predictions than the patch-based features of DINOv2.



Method         VOC2012   ADE20K
DINOv2          82.1      47.7
REN-DINOv2      86.5      50.9
DINO            66.4      31.8
REN-DINO        71.4      35.1
OpenCLIP        71.4      39.3
REN-OpenCLIP    78.0      42.8
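
As a rough illustration of how region tokens can be probed for segmentation, the snippet below trains a linear classifier on frozen REN tokens, supervising each token with the ground-truth class at its prompted point. The probing protocol, feature width, and hyperparameters are assumptions for illustration; the paper's evaluation pipeline may differ.

import torch
import torch.nn as nn

num_classes, dim = 21, 768  # e.g., PASCAL VOC 2012 (20 classes + background); dim depends on the encoder
probe = nn.Linear(dim, num_classes)
optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)

def probe_step(region_tokens, point_labels):
    # region_tokens: (P, dim) tokens from a frozen REN; point_labels: (P,) class ids at the prompts
    logits = probe(region_tokens.detach())  # only the linear probe is trained
    loss = nn.functional.cross_entropy(logits, point_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()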

Finding Needle in a Haystack

On Visual Haystacks' single-needle challenge, REN outperforms proprietary LMMs, open-source LMMs, and RAG-based methods, especially for larger numbers of input images (denoted by N). "E" indicates context overflow, execution failure, or API error.

Method           N=1    N=2    N=3    N=5    N=10   N=20   N=50   N=100  N=500  N=1K
Gemini 1.5 Pro   88.4   82.0   78.3   76.0   71.9   68.6   62.8   57.4   E      E
GPT-4o           82.5   79.9   77.5   73.3   68.2   65.4   59.7   55.3   E      E
LongVILA         63.8   59.0   57.7   56.7   55.6   52.0   52.0   52.0   E      E
Qwen2-VL         80.9   76.6   73.6   67.9   62.6   59.1   52.6   E      E      E
Phi-3            80.5   69.1   67.3   62.0   54.8   52.6   50.8   E      E      E
InternVL2        88.1   80.5   72.3   63.9   58.8   55.2   E      E      E      E
mPLUG-OWL3       84.4   66.0   62.1   57.0   53.2   51.5   E      E      E      E
LLaVA-v1.5       85.8   77.1   75.8   68.6   63.6   60.4   55.3   57.5   55.4   52.9
MIRAGE           83.2   77.8   76.6   72.8   70.5   66.0   63.6   62.0   58.7   55.7
SigLIP 2         72.0   69.2   68.1   65.3   64.1   60.3   58.7   58.3   56.6   54.9
REN              81.2   78.6   77.4   76.0   74.0   72.1   68.3   65.5   62.3   59.2

Single-Shot Object-Based Image Retrieval

Region-based methods outperform the patch-based baseline, and REN further surpasses the SAM-based baseline while offering faster and more efficient region token generation.

Method       mAP    mRP@50
DINOv2       0.13   0.33
SAM-DINOv2   0.45   0.58
REN          0.52   0.65
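
For intuition, single-shot object retrieval with region tokens reduces to a nearest-region search: score each gallery image by its best-matching region token and rank the results. The max-over-regions scoring rule below is an assumption for illustration, not necessarily the paper's exact protocol.

import torch
import torch.nn.functional as F

def retrieve(query_token, gallery_tokens, top_k=50):
    # query_token:    (D,) region token for the prompted object in the query image
    # gallery_tokens: list of (P_i, D) region-token tensors, one per gallery image
    q = F.normalize(query_token, dim=-1)
    scores = torch.stack([
        (F.normalize(tokens, dim=-1) @ q).max()  # image score = best region match
        for tokens in gallery_tokens
    ])
    values, indices = torch.topk(scores, k=min(top_k, scores.numel()))
    return values, indices  # ranked similarities and the corresponding gallery indices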

BibTeX

@inproceedings{khosla2025ren,
    title={REN: Fast and Efficient Region Encodings from Patch-Based Image Encoders},
    author={Savya Khosla and Sethuraman T V and Barnett Lee and Alexander Schwing and Derek Hoiem},
    booktitle={Neural Information Processing Systems},
    year={2025}
}