
Project page: https://ku-cvlab.github.io/locotrack/

\n","updatedAt":"2024-07-23T05:28:00.910Z","author":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","fullname":"AK","name":"akhaliq","type":"user","isPro":false,"isHf":true,"isHfAdmin":false,"isMod":false,"followerCount":9179,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.28397348523139954},"editors":["akhaliq"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg"],"reactions":[],"isReport":false}},{"id":"66a059d2b87f88e8aeff17ac","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2024-07-24T01:33:06.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Long-Term 3D Point Tracking By Cost Volume Fusion](https://huggingface.co/papers/2407.13337) (2024)\n* [Decomposition Betters Tracking Everything Everywhere](https://huggingface.co/papers/2407.06531) (2024)\n* [Learning Spatial-Semantic Features for Robust Video Object Segmentation](https://huggingface.co/papers/2407.07760) (2024)\n* [SRPose: Two-view Relative Pose Estimation with Sparse Keypoints](https://huggingface.co/papers/2407.08199) (2024)\n* [Training-Free Robust Interactive Video Object Segmentation](https://huggingface.co/papers/2406.05485) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend

\n","updatedAt":"2024-07-24T01:33:06.230Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7194408178329468},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2407.15420","authors":[{"_id":"669f3f5a3e173b3293dfafc0","user":{"_id":"602e45160daeb0df2a81b244","avatarUrl":"/avatars/f6bf69f0c1342f8cfad05d5775e59bf4.svg","isPro":true,"fullname":"Seokju Cho","user":"hamacojr","type":"user"},"name":"Seokju Cho","status":"claimed_verified","statusLastChangedAt":"2024-07-24T07:27:15.383Z","hidden":false},{"_id":"669f3f5a3e173b3293dfafc1","user":{"_id":"644a717e75fce8ebef4e4955","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/zLga4NZBohFPlv50dcAo9.png","isPro":false,"fullname":"Jiahui Huang","user":"heiwang1997","type":"user"},"name":"Jiahui Huang","status":"admin_assigned","statusLastChangedAt":"2024-07-23T12:55:40.749Z","hidden":true},{"_id":"669f3f5a3e173b3293dfafc2","user":{"_id":"637c7420f219c71f93ec8f81","avatarUrl":"/avatars/969b72bd4320423af89e6a5d0ffa03cc.svg","isPro":false,"fullname":"frog","user":"frog123123123123","type":"user"},"name":"Jisu Nam","status":"admin_assigned","statusLastChangedAt":"2024-07-23T12:55:55.864Z","hidden":false},{"_id":"669f3f5a3e173b3293dfafc3","name":"Honggyu An","hidden":false},{"_id":"669f3f5a3e173b3293dfafc4","user":{"_id":"65cf717450818a335a1d3021","avatarUrl":"/avatars/382a0e0f40f661cda1b2531e3e6ea2ee.svg","isPro":false,"fullname":"Seungryong Kim","user":"seungryong","type":"user"},"name":"Seungryong Kim","status":"admin_assigned","statusLastChangedAt":"2024-07-23T12:56:10.746Z","hidden":false},{"_id":"669f3f5a3e173b3293dfafc5","user":{"_id":"64b76c7990b38df83381824b","avatarUrl":"/avatars/0a5bec2ea480fb3f43c4b24d59c50e81.svg","isPro":false,"fullname":"Joon-Young Lee","user":"joonyounglee","type":"user"},"name":"Joon-Young Lee","status":"admin_assigned","statusLastChangedAt":"2024-07-23T12:56:17.205Z","hidden":false}],"publishedAt":"2024-07-22T06:49:56.000Z","submittedOnDailyAt":"2024-07-23T03:58:00.906Z","title":"Local All-Pair Correspondence for Point Tracking","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"We introduce LocoTrack, a highly accurate and efficient model designed for\nthe task of tracking any point (TAP) across video sequences. Previous\napproaches in this task often rely on local 2D correlation maps to establish\ncorrespondences from a point in the query image to a local region in the target\nimage, which often struggle with homogeneous regions or repetitive features,\nleading to matching ambiguities. LocoTrack overcomes this challenge with a\nnovel approach that utilizes all-pair correspondences across regions, i.e.,\nlocal 4D correlation, to establish precise correspondences, with bidirectional\ncorrespondence and matching smoothness significantly enhancing robustness\nagainst ambiguities. 
We also incorporate a lightweight correlation encoder to\nenhance computational efficiency, and a compact Transformer architecture to\nintegrate long-term temporal information. LocoTrack achieves unmatched accuracy\non all TAP-Vid benchmarks and operates at a speed almost 6 times faster than\nthe current state-of-the-art.","upvotes":6,"discussionId":"669f3f5c3e173b3293dfb03e","githubRepo":"https://github.com/ku-cvlab/locotrack","githubRepoAddedBy":"auto","ai_summary":"LocoTrack enhances video point tracking using 4D correlation and a compact Transformer to achieve high accuracy and speed.","ai_keywords":["local 4D correlation","bidirectional correspondence","correlation encoder","Transformer architecture","TAP-Vid benchmarks"],"githubStars":208},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6258561f4d4291e8e63d8ae6","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/rBcVzpNkUzB0ZTNJnyUDW.png","isPro":true,"fullname":"Sylvestre Bcht","user":"Sylvestre","type":"user"},{"_id":"648eb1eb59c4e5c87dc116e0","avatarUrl":"/avatars/c636cea39c2c0937f01398c94ead5dad.svg","isPro":false,"fullname":"fdsqefsgergd","user":"T-representer","type":"user"},{"_id":"65cf717450818a335a1d3021","avatarUrl":"/avatars/382a0e0f40f661cda1b2531e3e6ea2ee.svg","isPro":false,"fullname":"Seungryong Kim","user":"seungryong","type":"user"},{"_id":"64b76c7990b38df83381824b","avatarUrl":"/avatars/0a5bec2ea480fb3f43c4b24d59c50e81.svg","isPro":false,"fullname":"Joon-Young Lee","user":"joonyounglee","type":"user"},{"_id":"602e45160daeb0df2a81b244","avatarUrl":"/avatars/f6bf69f0c1342f8cfad05d5775e59bf4.svg","isPro":true,"fullname":"Seokju Cho","user":"hamacojr","type":"user"},{"_id":"659cb6cc38186a51f122689e","avatarUrl":"/avatars/11c33c81e87f55091b672c64f7c743d3.svg","isPro":false,"fullname":"Park JuHoon","user":"J4BEZ","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
arxiv:2407.15420

Local All-Pair Correspondence for Point Tracking

Published on Jul 22, 2024 · Submitted by AK on Jul 23, 2024

Abstract

AI-generated summary: LocoTrack enhances video point tracking using local 4D correlation and a compact Transformer to achieve high accuracy and speed.

We introduce LocoTrack, a highly accurate and efficient model designed for the task of tracking any point (TAP) across video sequences. Previous approaches to this task often rely on local 2D correlation maps to establish correspondences from a point in the query image to a local region in the target image; these maps struggle with homogeneous regions or repetitive features, leading to matching ambiguities. LocoTrack overcomes this challenge with a novel approach that utilizes all-pair correspondences across regions, i.e., local 4D correlation, to establish precise correspondences, with bidirectional correspondence and matching smoothness significantly enhancing robustness against ambiguities. We also incorporate a lightweight correlation encoder to enhance computational efficiency and a compact Transformer architecture to integrate long-term temporal information. LocoTrack achieves unmatched accuracy on all TAP-Vid benchmarks and operates almost 6 times faster than the current state of the art.
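To make the distinction in the abstract concrete, the sketch below contrasts a local 2D correlation (a single query feature matched against a target window) with a local 4D correlation (all pairs between a query window and a target window). This is a minimal NumPy sketch under assumed feature-map shapes and window sizes; the function names and parameters are illustrative and do not reproduce the authors' implementation.

```python
import numpy as np

def local_2d_correlation(query_feat, target_feat, q_pt, t_center, r=3):
    """Correlation between one query feature and a (2r+1)x(2r+1) target
    window, as in prior local-2D-correlation trackers. Returns (2r+1, 2r+1).
    Assumes the window centers are at least r pixels from the border."""
    qy, qx = q_pt
    ty, tx = t_center
    q = query_feat[qy, qx]                                        # (C,)
    t_win = target_feat[ty - r:ty + r + 1, tx - r:tx + r + 1]     # (2r+1, 2r+1, C)
    return np.einsum("c,ijc->ij", q, t_win)

def local_4d_correlation(query_feat, target_feat, q_center, t_center, r=3):
    """All-pair correlation between a local query window and a local target
    window, i.e. a local 4D correlation. Returns (2r+1, 2r+1, 2r+1, 2r+1)."""
    qy, qx = q_center
    ty, tx = t_center
    q_win = query_feat[qy - r:qy + r + 1, qx - r:qx + r + 1]      # (2r+1, 2r+1, C)
    t_win = target_feat[ty - r:ty + r + 1, tx - r:tx + r + 1]     # (2r+1, 2r+1, C)
    return np.einsum("ijc,klc->ijkl", q_win, t_win)

# Toy usage with random features; window centers are kept away from the border.
H, W, C = 64, 64, 128
f_query = np.random.randn(H, W, C).astype(np.float32)
f_target = np.random.randn(H, W, C).astype(np.float32)
corr2d = local_2d_correlation(f_query, f_target, (32, 32), (30, 34))   # (7, 7)
corr4d = local_4d_correlation(f_query, f_target, (32, 32), (30, 34))   # (7, 7, 7, 7)
```

Because the 4D volume scores every query-window pixel against every target-window pixel, it carries enough context to enforce bidirectional consistency and matching smoothness, which is what the paper credits for robustness in homogeneous or repetitive regions.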

Community

Paper submitter (AK)
https://ku-cvlab.github.io/locotrack/

Librarian Bot
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* [Long-Term 3D Point Tracking By Cost Volume Fusion](https://huggingface.co/papers/2407.13337) (2024)
* [Decomposition Betters Tracking Everything Everywhere](https://huggingface.co/papers/2407.06531) (2024)
* [Learning Spatial-Semantic Features for Robust Video Object Segmentation](https://huggingface.co/papers/2407.07760) (2024)
* [SRPose: Two-view Relative Pose Estimation with Sparse Keypoints](https://huggingface.co/papers/2407.08199) (2024)
* [Training-Free Robust Interactive Video Object Segmentation](https://huggingface.co/papers/2406.05485) (2024)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers).

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`


Models citing this paper 0

No model linking this paper


Datasets citing this paper 0

No dataset linking this paper


Spaces citing this paper 1

Collections including this paper 0

No Collection including this paper
