Abd0r/README.md

Hey, I'm Abdur 👋

17 y/o · Independent Researcher · India
Building toward Artificial Intelligence's peak · One primitive at a time

X · Hugging Face · Email · ORCID · PyPI


About Me

I'm a self-taught researcher working without a CS degree, a lab, or a team. I reverse-engineer how the brain works to build AI systems fundamentally different from today's transformers.

My work spans neural architectures, training frameworks, cognitive systems, and — most recently — post-CMOS computing hardware. Different domains, same approach: rebuild from first principles, and eliminate what everyone else assumed was necessary.


🔬 Published Work

βš›οΈ FEA β€” Free Electron Absorption Architecture

A transistor-free computing architecture on hydrogen-passivated silicon. Computation happens by resonant electron absorption in 5-atom dangling-bond clusters, not by switching. On a 3 cm² die: 14.1 TB of in-situ memory at 79 mW — roughly 110× the M4 Max memory, at ~0.2% of the power. Memory and compute are the same atoms — no cache, no DRAM bus. My first paper outside machine learning.
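A quick sanity check of the headline ratios, assuming a 128 GB M4 Max and roughly 40 W typical package power (both figures are my assumptions for comparison, not from the paper):

```python
# Back-of-the-envelope check of the FEA headline numbers.
fea_memory_tb = 14.1    # in-situ memory on the die
fea_power_w = 0.079     # 79 mW
die_area_cm2 = 3.0

m4_max_memory_tb = 128 / 1000   # assumed: 128 GB top configuration
m4_max_power_w = 40.0           # assumed: typical package power

memory_ratio = fea_memory_tb / m4_max_memory_tb       # ~110x the memory
power_fraction = fea_power_w / m4_max_power_w         # ~0.2% of the power
density_tb_per_cm2 = fea_memory_tb / die_area_cm2     # ~4.7 TB per cm^2
```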


🧭 Quatrix — Q-Compass + SAVO Architecture

Empirical follow-up to Q-Compass. Three attention projections instead of four — content routes through a rank-r state ⊙ action matrix grounded in RL navigation. 16× smaller KV-cache than standard attention at ≤1.6 ppl penalty. One block class trains across text, vision, audio, world states, and cancer genomics.
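A minimal NumPy sketch of my reading of the block above; the projection names, wiring, and dimensions are assumptions, not Quatrix's actual design. It shows how caching two rank-r vectors per token instead of full K and V rows gives a 16× smaller cache when d_model / r = 16:

```python
import numpy as np

d_model, r, seq = 64, 4, 10
rng = np.random.default_rng(0)

# Three projections instead of Q/K/V/O (names are illustrative assumptions)
W_state  = rng.normal(size=(d_model, r)) / np.sqrt(d_model)
W_action = rng.normal(size=(d_model, r)) / np.sqrt(d_model)
W_out    = rng.normal(size=(r, d_model)) / np.sqrt(r)

x = rng.normal(size=(seq, d_model))

state  = x @ W_state        # (seq, r)
action = x @ W_action       # (seq, r)
routed = state * action     # elementwise "state (.) action", rank-r bottleneck
y = routed @ W_out          # back to model width: (seq, d_model)

# Per-token cache: two rank-r vectors instead of full K and V rows
cache_standard = 2 * d_model    # K + V per token
cache_sketch   = 2 * r          # state + action per token (assumed)
```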


🧠 Artificial Neural Mesh (ANM) V0

A modular multi-agent cognitive architecture featuring 12 specialized domain experts collaborating through Web-of-Thought (WoT) reasoning.
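In spirit, the expert-routing layer might look like this toy dispatcher; the expert names and keyword rule are invented for illustration (ANM itself uses 12 experts collaborating through Web-of-Thought reasoning, not keyword matching):

```python
# Hypothetical experts -- ANM's real experts are LLM-backed domain agents.
EXPERTS = {
    "math":  lambda q: f"[math expert] {q}",
    "code":  lambda q: f"[code expert] {q}",
    "logic": lambda q: f"[logic expert] {q}",
}

def route(query: str, keywords: dict) -> str:
    """Send the query to the first expert whose keyword it mentions."""
    for word, expert in keywords.items():
        if word in query.lower():
            return EXPERTS[expert](query)
    return EXPERTS["logic"](query)  # fallback expert

answer = route("integrate x^2", {"integrate": "math", "compile": "code"})
```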


⚡ GEKO — Gradient-Efficient Knowledge Optimization

A plug-and-play fine-tuning framework that skips samples the model already knows — routing compute to hard samples and freezing mastered ones. Up to 80% compute savings at scale.
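The skip/boost rule fits in a few lines. The thresholds and the 5× boost factor here are illustrative assumptions, not GEKO's actual schedule:

```python
def geko_passes(losses, mastered=0.2, hard=1.5, boost=5):
    """Gradient passes per sample: 0 if mastered, `boost` if hard, else 1.

    Thresholds and boost factor are assumed values for illustration.
    """
    return [0 if l < mastered else boost if l > hard else 1 for l in losses]

# Mastered samples are frozen; the hardest get 5x the compute.
passes = geko_passes([0.05, 0.6, 1.9, 0.1])   # -> [0, 1, 5, 0]
```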


🚀 What's Next

NanoG1

NanoG1 — the next chapter, building on what Quatrix and FEA established. Details soon.


Built from scratch · No lab · No shortcuts

Pinned Repositories

  1. FEA (C++) — Free Electron Absorption Architecture.

  2. quatrix (Python) — Q-Compass Architecture: a novel neural architecture replacing attention with value-based navigation. Base for the Quasar model series.

  3. GEKO (Python) — intelligent training framework that automatically skips mastered samples and gives 5× more compute to hard ones. Up to 80% compute savings on LLM fine-tuning.

  4. ANM-Prototype-Logs — logs and evidence for the ANM-V1.5 prototype: Web-of-Thought engine, parallel R1 workers, verifier output, and system traces executed on a MacBook Air M2.