Jan 11, 2026

Intransient: TweetTracker

Caleb Rudnick, Kira Webb, Roger Arendse, Charl Botha

A Chrome extension that shows a rough real-time analysis of how much a Twitter feed leans left or right and how much manipulative content it contains.

Reviewer's Comments

I have enjoyed the experiment execution here: recording actual humans for authentic data is bold within a hackathon's timeframe. Overall, the impact of AI on political propaganda, the formation of echo chambers, and related phenomena is an exciting topic, and I'm glad people are expressing interest in it.

Unfortunately, the results section felt like a letdown: we see graphs showing the number of "biased" posts scaling linearly with the number of posts viewed, and that the models correctly mark Trump and Sanders as right- and left-leaning, respectively. This is expected; I would be excited by something more interesting and information-dense. Research questions that first come to mind in relation to exploring AI's impact on social media:

1. Is it true that AI-generated content is disproportionately right- or left-wing? If so, what is AI's impact?

2. What is the correlation between a user's bias score and other detectable metrics, such as positive/negative content, average time spent per post, engagement with posts, or the "cliqueness" of their feed? (A minimal sketch of such a correlation analysis follows this list.)

3. How does the bias score evolve on a fresh account, and what are the dynamics of convergence? How actively do the algorithms push the user towards specific content (biased, negative, topic-locked, etc.)?
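To make question 2 concrete, here is a minimal sketch of the correlation analysis. The per-user data shape and the numbers are hypothetical; only the statistic itself is standard:

```typescript
// Sketch for research question 2: correlate the strength of a user's bias
// score with another per-user metric (here, average seconds spent per post).
// Data shape and values are hypothetical toy examples.

interface UserStats {
  biasScore: number;        // -1 (left) .. +1 (right)
  avgSecondsPerPost: number;
}

function pearson(xs: number[], ys: number[]): number {
  const mean = (a: number[]) => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(xs), my = mean(ys);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < xs.length; i++) {
    const dx = xs[i] - mx, dy = ys[i] - my;
    cov += dx * dy;
    vx += dx * dx;
    vy += dy * dy;
  }
  return cov / Math.sqrt(vx * vy);
}

// Toy users; |biasScore| measures strength of leaning regardless of direction.
const users: UserStats[] = [
  { biasScore: 0.8, avgSecondsPerPost: 12 },
  { biasScore: -0.6, avgSecondsPerPost: 10 },
  { biasScore: 0.1, avgSecondsPerPost: 5 },
  { biasScore: -0.9, avgSecondsPerPost: 14 },
];
const r = pearson(
  users.map((u) => Math.abs(u.biasScore)),
  users.map((u) => u.avgSecondsPerPost),
);
console.log(`Pearson r between |bias| and dwell time: ${r.toFixed(3)}`);
```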

I trust that the authors will be able to come up with questions infinitely more interesting than these if they choose to continue working in this direction, and I'm genuinely excited to see the results.

The project successfully created a Chrome extension designed to visualize manipulation on X in real time. The tool tracks two primary signals for every post: a "bias score" placing the content on a left/right political spectrum, and a boolean flag indicating whether the post is "manipulative". Both classifications are decided dynamically by an LLM, specifically testing Gemini 2.5 variants to label posts as users scroll through their feeds.
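For concreteness, here is a minimal sketch of what such a per-post classification call might look like. The endpoint and response shape follow Google's public Generative Language API, but the prompt wording, model name, and JSON contract are my own illustrative assumptions rather than the authors' implementation:

```typescript
// Hypothetical sketch of the per-post classification step: send the tweet
// text to a Gemini model and parse a bias score plus a manipulation flag.

interface PostLabel {
  bias: number;         // -1 (far left) .. +1 (far right)
  manipulative: boolean;
}

async function classifyPost(text: string, apiKey: string): Promise<PostLabel> {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    "gemini-2.5-flash-lite:generateContent?key=" + apiKey;
  const prompt =
    "Rate the political leaning of this post from -1 (left) to 1 (right) " +
    "and say whether it is manipulative (e.g. emotional exploitation, " +
    'sycophancy). Answer with bare JSON: {"bias": number, "manipulative": boolean}\n\n' +
    text;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  });
  const data = await res.json();
  // Assumes the model honours the bare-JSON instruction in the prompt.
  return JSON.parse(data.candidates[0].content.parts[0].text) as PostLabel;
}
```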

To validate the tool, the authors tested the accuracy of their models (Gemini 2.5 Flash-Lite vs. Pro) using the feeds of Donald Trump and Bernie Sanders as proxies for ground truth. While the provided plots indicate that the Pro model is more capable of discerning these political leanings, the test is technically inconclusive. The methodology relies on the strong assumption that every tweet from these public figures falls explicitly on one side of the spectrum, ignoring the possibility of moderate or neutral posts. Additionally, the evaluation would have benefited significantly from a single mean-score metric rather than relying solely on visual scatter plots, or ideally, from a comparison against human-labeled ground-truth data.
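As a sketch of the suggested summary metric, with toy scores standing in for real classifier output, per-feed mean bias and the gap between the two feeds collapse the scatter plots into a single number:

```typescript
// Sketch of the single-number evaluation suggested above: mean bias per
// feed and the separation between the two feeds. The scores below are
// toy values in [-1, 1], not real model output.

function meanBias(scores: number[]): number {
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}

const trumpFeed = [0.9, 0.7, 0.4, 0.8, 0.6];        // hypothetical per-tweet scores
const sandersFeed = [-0.8, -0.6, -0.9, -0.5, -0.7];

const separation = meanBias(trumpFeed) - meanBias(sandersFeed);
console.log(`mean(Trump)   = ${meanBias(trumpFeed).toFixed(2)}`);
console.log(`mean(Sanders) = ${meanBias(sandersFeed).toFixed(2)}`);
console.log(`separation    = ${separation.toFixed(2)}`); // larger = cleaner discrimination
```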

The LLM analysis indicates that users are exposed to a significant amount of consistently manipulative content, which the authors argue could adversely impact user intuition through the "illusory truth effect". However, what constitutes "manipulative" is not strictly defined in the paper; it is left entirely to the LLM to decide, based on a predetermined prompt describing characteristics like emotional exploitation or sycophancy. A ground-truth dataset defining specific manipulative characteristics would have been necessary to validate these claims robustly.
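If such a dataset existed, the check could be as simple as an agreement rate between human and LLM labels. A minimal sketch, with a hypothetical data shape and toy entries:

```typescript
// Sketch of validating the "manipulative" flag against human annotation:
// raw agreement rate over a hand-labeled set. A real study would also
// want many more posts and inter-annotator agreement checks.

interface LabeledPost {
  humanManipulative: boolean;
  llmManipulative: boolean;
}

function agreementRate(posts: LabeledPost[]): number {
  const matches = posts.filter(
    (p) => p.humanManipulative === p.llmManipulative,
  ).length;
  return matches / posts.length;
}

const sample: LabeledPost[] = [
  { humanManipulative: true, llmManipulative: true },
  { humanManipulative: false, llmManipulative: true },
  { humanManipulative: false, llmManipulative: false },
  { humanManipulative: true, llmManipulative: true },
];
console.log(`agreement = ${agreementRate(sample).toFixed(2)}`); // 0.75
```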

While the core idea is interesting, a more robust evaluation would involve a controlled user study, perhaps asking users to assess the topics they encountered while using the tool, rather than relying on the assumption that awareness equates to safety. Though I imagine that relying on X's algorithm to be consistent for such a study is difficult. There is also potential value in analyzing the intersection of a tweet's "manipulativeness" score and its political leaning, though avoiding this specific metric is understandable given the sensitive political nature of the topic. It could also be nice to integrate a lightweight LLM-detector model into the extension and then test it over a series of feeds on X to gather data on the presence of bot accounts on the platform.

Cite this work

@misc{
  title={(HckPrj) Intransient: TweetTracker},
  author={Caleb Rudnick and Kira Webb and Roger Arendse and Charl Botha},
  date={2026-01-11},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

View All

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.