Another nail in the coffin of *full business automation with LLMs*
Another nail in the coffin of "full business automation with LLMs". Some have been saying this for years.
OpenClaw (previously Clawdbot/Moltbot) is an interesting piece of software. It combines several LLM steering patterns (loops, reflection, planning, self-editing of system prompts, persistent memory, an "identity"/persona) into a coherent agent runner.
Anthropic: good methods, terrible language. But they're being criticized as if they're wrong on both.
Claude's Constitution - a definition of the "character" of Anthropic's models.
After encountering manipulative techniques in AI companions during role-play sessions, the author proposes a behavioral screening battery for dynamic risk tiering of access to AI companions. Instead of endless debates about age verification, we need practical safety features that gauge and manage access based on behavioral patterns.
Using thalidomide as an analogy, this post argues that AI mental health tools need rigorous safety evaluation before deployment. Despite known failure modes and inadequate responses to crisis scenarios, the tech industry's "move fast and break things" approach is rolling out these systems to vulnerable populations like teenagers without proper regulatory oversight.
LLMs respond to psychological language like "think step by step" or "be skeptical" because our writing carries psychological fingerprints. Models capture correlations between cognitive styles and linguistic patterns from training data, allowing psychological cues to unlock specific reasoning clusters that improve performance when aligned with the task.
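A toy way to check cue alignment yourself is to A/B the same task with and without a cue; `call_llm` below is a hypothetical stub, not any particular API, and the question and trial count are illustrative only:

```python
# Toy A/B sketch: the same arithmetic question asked plainly and with a
# "think step by step" cue, scored over a few trials.
# call_llm is a hypothetical stand-in for whatever chat-completion call you use.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat API here")

QUESTION = ("A train leaves at 9:40 and arrives at 13:05. "
            "How many minutes is the trip? Answer with a number.")
ANSWER = "205"

def accuracy(prefix: str, trials: int = 10) -> float:
    # Count how often the correct number shows up in the reply.
    hits = sum(ANSWER in call_llm(prefix + QUESTION) for _ in range(trials))
    return hits / trials

print("plain:", accuracy(""))
print("cued: ", accuracy("Think step by step. "))  # cue aligned with a reasoning task
```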
Testing LLM stability by measuring sensitivity to small perturbations like name changes and CV format variations. Results show newer models are often more capable but also more sensitive to input variations than their predecessors, with well-known names particularly destabilizing assessments.
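A minimal sketch of that kind of perturbation test, assuming a hypothetical `call_llm` standing in for whatever chat API you use; the CV template, name list, and scoring prompt are illustrative, not taken from the post:

```python
# Score the same CV under name swaps, then compare the spread.
# A stable model should give nearly identical scores for every name.

import statistics

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-completion call here")

CV_TEMPLATE = """Candidate: {name}
Experience: 5 years backend development, Python and Go.
Education: BSc Computer Science.
"""

# Include a well-known name to probe the destabilization effect.
NAMES = ["Alex Smith", "Maria Garcia", "Wei Chen", "Elon Musk"]

def score(cv: str) -> float:
    reply = call_llm("Rate this candidate from 0 to 10. "
                     "Answer with a number only.\n\n" + cv)
    return float(reply.strip())  # assumes the model obeys the numeric format

scores = [score(CV_TEMPLATE.format(name=n)) for n in NAMES]
print("scores:", scores)
print("spread (stdev):", statistics.stdev(scores))  # small spread = stable model
```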
Using Replicate's nano-banana model to generate video frames with mood modifiers by adding steering phrases to a looped prompt that creates temporal continuity.
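A rough sketch of that loop using the Replicate Python client; the `google/nano-banana` slug, the input keys, and the output-to-URL handling are assumptions based on Replicate's usual conventions, not details from the post:

```python
# Looped-prompt frame generation: feed each output frame back in as input,
# varying only a mood phrase, to get temporal continuity across frames.
# Requires `pip install replicate` and REPLICATE_API_TOKEN in the environment.

import replicate

BASE = "Next frame of the same scene; keep composition and lighting consistent."
MOODS = ["calm morning light", "gathering storm", "neon-lit dusk"]

prev_url = None  # feeding the last frame back in is what creates continuity
for i, mood in enumerate(MOODS):
    inputs = {"prompt": f"{BASE} Mood: {mood}."}
    if prev_url:
        inputs["image_input"] = [prev_url]  # condition on the previous frame
    out = replicate.run("google/nano-banana", input=inputs)
    prev_url = str(out)  # assumes the output stringifies to an image URL
    print(f"frame {i} ({mood}): {prev_url}")
```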
Despite claims that LLMs are "just tools" that don't change us, research shows they can shift attitudes and decisions more effectively than humans. Subtle linguistic nudges, framing, and baked-in biases from AI labs influence our thinking in ways we're not fully aware of, creating a new psychotechnology infrastructure rolled out at scale to knowledge workers.