Lexsi (Lexsi Labs)

AI & ML interests

Frontier research on safe and aligned intelligence.

https://www.lexsi.ai

Paris 🇫🇷 · Mumbai 🇮🇳 · London 🇬🇧

Lexsi Labs drives frontier research in aligned and safe AI. Our goal is to build AI systems that are transparent, reliable, and value-aligned, combining interpretability, alignment, and governance to enable trustworthy intelligence at scale.

Research Focus

  • Aligned & Safe AI: Frameworks for self-monitoring, interpretable, and alignment-aware systems.
  • Explainability & Alignment: Faithful, architecture-agnostic interpretability and value-aligned optimization across tabular, vision, and language models.
  • Safe Behaviour Control: Techniques for fine-tuning, pruning, and behavioural steering in large models.
  • Risk & Governance: Continuous monitoring, drift detection, and fairness auditing for responsible deployment; a minimal drift-detection sketch follows this list.
  • Tabular & LLM Research: Foundational work on tabular intelligence, in-context learning, and interpretable large language models.
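
As a deliberately generic illustration of the drift-detection theme above, the sketch below compares a reference window of tabular features against a live window using SciPy's two-sample Kolmogorov–Smirnov test and flags features whose distribution has shifted. This is a minimal sketch of the general technique, not Lexsi's tooling; the `detect_drift` helper and the `alpha` threshold are assumptions made for the example.

```python
# Minimal drift-detection sketch (illustrative only, not Lexsi's tooling).
# Compares each feature's live distribution against a reference window with
# a two-sample Kolmogorov-Smirnov test and flags significant shifts.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01):
    """Per-feature drift flags for two (n_samples, n_features) arrays."""
    report = {}
    for j in range(reference.shape[1]):
        stat, p_value = ks_2samp(reference[:, j], live[:, j])
        report[j] = {"ks_stat": stat, "p_value": p_value, "drifted": p_value < alpha}
    return report

# Example: feature 0 stays stable, feature 1 shifts between windows.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(5000, 2))
new = np.column_stack([rng.normal(0.0, 1.0, 5000), rng.normal(0.8, 1.0, 5000)])
print(detect_drift(ref, new))
```

A production monitor would add multiple-testing correction and tests suited to categorical features, but the flow shown here (reference window, live window, per-feature test, flag) is the core loop of continuous drift monitoring.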

Datasets

None public yet