Machine learning

The basis of artificial intelligence

Neurosymbolic AI: The Architecture of a Semantic Neural Network. How to Teach LLMs to Calculate

Level of difficulty: Easy
Reading time: 17 min
Reach and readers: 1.1K

LLMs fail at elementary math. Corporations spend billions, yet they are ultimately forced to attach calculators to computing machines of incredible power. All attempts to fix this via Chain-of-Thought, fine-tuning on arithmetic tasks, or context expansion have failed.

I conducted a series of experiments to understand why, and came to the conclusion that neural networks are simply not meant for discrete arithmetic. Their true purpose is continuous transformations.
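To make the "continuous vs. discrete" point concrete before the article itself, here is a minimal sketch (my own illustration, not the author's experiment; sizes and ranges are arbitrary): a small MLP trained to add two integers as a regression problem lands near the true sum but almost never exactly on it, and "near" is already a wrong answer in arithmetic.

```python
# Illustrative sketch only (not the article's code): a small MLP treats
# a + b as continuous regression, so it gets close to the sum but rarely
# returns the exact integer -- which is what arithmetic requires.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.integers(0, 50, size=(20_000, 2)).astype(float)   # pairs (a, b)
y = X.sum(axis=1)                                          # exact target a + b

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X / 50.0, y)                                     # scaled inputs, raw targets

test = np.array([[17.0, 25.0], [43.0, 49.0], [4.0, 31.0]])
for (a, b), p in zip(test, model.predict(test / 50.0)):
    print(f"{a:.0f} + {b:.0f} = {p:.3f}  (exact {a + b:.0f})")
# The prediction is usually close to the true sum but not exactly equal to it:
# a tiny error for regression, a flat-out wrong answer for discrete arithmetic.
```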

This article describes the implementation of a novel neural network architecture that combines the precision of symbolic AI with the generalization capabilities of LLMs. As always, experiments and code are included.

Read more

Nano Banana Pro — why is it a breakthrough model for image generation and editing? Let's check with real examples

Level of difficulty: Easy
Reading time: 5 min
Reach and readers: 654

November 20 marked the official launch of Nano Banana Pro (Gemini-3-Pro-Image-Preview), built on the powerful Gemini 3 Pro as its foundation. This is a more mature tool for design, infographics, and content. We will look not only at the new features and at why this particular model is a breakthrough, but also see it in action with real examples.
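If you prefer to try the model from code rather than the web UI, here is a hedged sketch using the google-genai Python SDK. The model id is taken from the announcement above; whether it is available on your API tier, and the exact response configuration it expects, are assumptions to verify against the official documentation.

```python
# Hedged sketch: calling the image model through the google-genai SDK.
# The model id comes from the announcement; treat it and the response
# modalities as assumptions to check against the current docs.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY / GOOGLE_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-3-pro-image-preview",   # id as given in the announcement
    contents="An infographic explaining how HBM memory differs from DDR5",
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# Images come back as inline binary parts; save the first one found.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("nano_banana_pro.png", "wb") as f:
            f.write(part.inline_data.data)
        break
```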

Read more

Local Chatbot Without Limits: A Guide to LM Studio and Open LLMs

Level of difficulty: Easy
Reading time: 11 min
Reach and readers: 417

In this article, we will not only install a local (and free) alternative to ChatGPT, but also review several open LLMs, delve into the advanced settings of LM Studio, connect the chatbot to Visual Studio Code, and teach it to assist us with programming. We will also look at how to fine-tune the model's behavior using system prompts.
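As a small preview of the programming part: LM Studio can serve any loaded model through an OpenAI-compatible local server, so a few lines with the standard openai client are enough to talk to it. The port below is LM Studio's default and the model id is a placeholder; substitute whatever your instance actually reports.

```python
# Minimal sketch: talking to a model loaded in LM Studio through its
# OpenAI-compatible local server (enabled in LM Studio's server tab).
# Port and model name are defaults/placeholders -- adjust to your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="lm-studio",                  # any non-empty string works locally
)

response = client.chat.completions.create(
    model="local-model",                  # placeholder: use the id LM Studio shows
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python one-liner that reverses a string."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```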

Read more

Google Antigravity and Gemini 3 Pro: What's Really Changing in Development and Why It's Not a Cursor Killer

Level of difficulty: Easy
Reading time: 11 min
Reach and readers: 331

On November 18, 2025, Google introduced a new combination: the Gemini 3 Pro model and the Google Antigravity IDE. The first is about controlled reasoning, long context, and multimodality. The second is about multi-agent development with artifacts and "transparent" steps. Headlines immediately flooded the feeds: "Cursor is dead."

In this article, we break down what exactly Google has launched, why the words "the smartest model" are an exaggeration, how Antigravity differs from Cursor, which development scenarios are already changing, and where it's still too early to abandon your familiar stack.

Read more

TOP 10 Sexting Services of 2025: The Best Bots and Platforms for Intimate Chatting

Level of difficulty: Hard
Reading time: 6 min
Reach and readers: 297

In 2025, sexting has become a real trend thanks to dedicated neural networks and convenient platforms that make online intimate messaging safe and exciting. With the development of artificial intelligence, online sexting has turned into an art where everyone can enjoy virtual flirting without risk. I tested dozens of services and selected the TOP 10 bots and apps for sexting in Russian, evaluating them on convenience, anonymity, and the quality of the intimate correspondence. These services offer everything from anonymous sexting to virtual sex chat with self-destructing photos. Let's figure out which sexting chatbots and platforms deserve your attention and how they work.

Read more

Why RAM prices skyrocketed in late 2025 and whether you should upgrade now

Level of difficulty: Medium
Reading time: 7 min
Reach and readers: 348

In the fall of 2025, many people, myself included, opened their favorite hardware store's website to 'quickly grab another 32–64 GB of DDR5 for games, an IDE, and a couple of Docker containers'—only to close the tab in mild culture shock. The memory that cost a 'reasonable' amount in the summer suddenly cost almost as much as a mid-range graphics card.

In short, this isn't 'greedy stores' but the consequence of a rather complex restructuring of the entire DRAM market for AI servers and HBM memory. In this article, we'll explore what's happening at memory factories, why PC modules are suffering the most, what to expect in 2026, and how to make upgrade decisions if you're a gamer, developer, or just a hardware enthusiast.

Read more

AI porn generators: ethics, trends, and legislation

Level of difficulty: Easy
Reading time: 6 min
Reach and readers: 444

Recently, AI image generation has become part of a much larger conversation about artificial intelligence, and the porn industry is no exception. Interest in this topic is growing, and so is the number of controversies surrounding it.

AI porn photo generators are programs that use machine learning algorithms to create realistic images. They can generate photos that look real but are actually the product of an algorithm.

AI uses extensive image databases for training and then, based on this training, creates new images. This can include porn photos, which raises ethical discussions.
Read more

A 64-Neuron Semantic Computer and Learning on Noise

Level of difficulty: Easy
Reading time: 19 min
Reach and readers: 310

In my previous Russian-language article on Machine Learning as Alchemy, I discussed the possibility of discovering novel solutions without relying on GPUs or expensive computing clusters. In this article, I will share my experiments with continual learning and the compositionality of thought using micro-neural networks, and explain what the philosopher Lev Vygotsky has to do with it all.
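The article's experiments are its own; as a generic illustration of what "continual learning with a micro-network" looks like mechanically, here is a sketch (toy tasks and sizes are my assumptions, not the author's setup): a single 64-unit hidden layer is trained on task A, then on task B with no replay, and accuracy on task A is re-measured afterwards.

```python
# Generic continual-learning sketch (not the author's setup): a 64-unit
# micro-network learns task A, then task B, and we re-check how much of A survives.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Toy binary task: the class depends on the sign of a shifted linear projection.
    X = torch.randn(2000, 8)
    y = ((X[:, 0] + shift * X[:, 1]) > 0).long()
    return X, y

def accuracy(model, X, y):
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

task_a, task_b = make_task(shift=1.0), make_task(shift=-1.0)
for X, y in (task_a, task_b):                 # sequential training, no replay
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

# Task A accuracy typically falls back toward chance after task B:
# catastrophic forgetting in miniature.
print("task A after learning B:", accuracy(model, *task_a))
print("task B:", accuracy(model, *task_b))
```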

Read more

TOP 12 Free Websites and Online Tools for Image Generation in 2025

Level of difficulty: Easy
Reading time: 10 min
Reach and readers: 2.2K

Image generation by neural networks has become a 'regular button' alongside familiar design tools. Today, you can create an image from a description in Russian, right in your browser, often without registration and, importantly, for free. Such a 'free image generator' is useful far beyond design: entrepreneurs make product cards and hero banners, SMM specialists produce ad creatives and stories, journalists and bloggers illustrate their materials, and developers mock up interfaces and game prototypes.

Why has this topic become so popular?

Read more

Activation Function Stress Test: GELU vs Tanh

Reading time: 8 min
Reach and readers: 5.6K

In modern neural networks, including Transformer-based LLMs, unbounded activation functions—ReLU and GELU—have become the standard. Their main advantages are good gradient flow and fast training of deep models.

However, in practice, a problem is observed: when dominant patterns or high-frequency noise appear in the input context (long dialogues, noisy data, repetitive or dominant tokens), models become unstable and prone to generation degradation and hallucinations.
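Before the stress test itself, the core difference is easy to see numerically (a quick sketch, not the article's experiment): tanh is bounded, so a dominant or repeated signal gets clamped, while GELU passes large positive inputs through almost unchanged.

```python
# Quick numeric contrast (not the article's stress test): tanh is bounded,
# so a dominant input is clamped to +/-1; GELU is unbounded above, so the
# same input keeps growing through the layer.
import torch
import torch.nn.functional as F

x = torch.tensor([0.5, 2.0, 5.0, 20.0, 100.0])
print("tanh :", torch.tanh(x))   # saturates at 1.0
print("gelu :", F.gelu(x))       # ~x for large positive inputs
print("tanh grad at 20:", 1 - torch.tanh(torch.tensor(20.0))**2)  # ~0: vanishing in saturation
```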

In this article, I attempted to find out if the choice of activation function could be fundamentally linked to LLM hallucinations.

Read more

Weight Decay Deep Dive: How Regularization Locks In Old Knowledge Instead of Erasing It

Level of difficulty: Easy
Reading time: 10 min
Reach and readers: 4.1K

In my previous article, I noted some interesting behavior regarding Weight Decay; here, I examine it in detail.

It is generally accepted in the ML industry that if we take a pre-trained model and fine-tune it on a new task, the old weights are gradually overwritten. Furthermore, if we add Weight Decay (L2 regularization), the process of "forgetting" superfluous information should theoretically happen even faster.
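For readers who have not used it directly, "adding Weight Decay" in practice means one of two things: the decoupled form baked into AdamW, or an explicit L2 term added to the loss. A minimal sketch with a placeholder model (not the article's training code):

```python
# What "fine-tuning with Weight Decay" means mechanically -- a sketch with a
# placeholder model, not the article's training code. In practice you pick
# ONE of the two options, not both at once.
import torch
import torch.nn as nn

model = nn.Linear(128, 10)          # stand-in for a pre-trained model being fine-tuned
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))

# Option 1: decoupled weight decay handled inside the optimizer (AdamW-style).
opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.1)
opt.zero_grad()
loss_fn(model(x), y).backward()
opt.step()

# Option 2: classic L2 regularization added explicitly to the loss.
lam = 0.1
opt2 = torch.optim.SGD(model.parameters(), lr=1e-4)
opt2.zero_grad()
l2 = sum((p ** 2).sum() for p in model.parameters())
(loss_fn(model(x), y) + lam * l2).backward()
opt2.step()
```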

I tested this claim experimentally. The results were counter-intuitive: under specific settings, Weight Decay works in the exact opposite way—it protects the old structure from destruction.

Below is a description of the experiment and conclusions for those involved in model training and AI safety.

Read more

Subliminal Learning and Structural Inertia: Why Neural Networks Remember What They Should Forget

Level of difficulty: Easy
Reading time: 20 min
Reach and readers: 5.9K

In my previous article, I explored the phenomenon of subliminal learning, but it raised more questions than answers. It is time to dive deeper. Below, you will find the experiments and the code.

In the fields of AI Alignment and LLM Security, a critical question remains: does fine-tuning or Reinforcement Learning from Human Feedback (RLHF) guarantee the removal of unwanted information?

Spoiler: The experiments demonstrated that the well-known Mode Connectivity effect makes the complete erasure of pre-training information practically impossible during standard fine-tuning. Structural Imprinting persists in the weight topology and can be read through a subliminal channel. Even with full weight unfreezing and aggressive L2 regularization (active forgetting), the latent space topology formed during the pre-training stage persists and determines the solution to the new task with an accuracy of 88–99%.
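The "reading through a subliminal channel" in the spoiler is essentially linear probing. Here is a generic sketch of the mechanics (toy encoder and task are my assumptions, not the author's setup): fit a simple linear classifier on the network's hidden activations and check how accurately the old, supposedly forgotten task can still be read out of them.

```python
# Generic probing sketch (not the author's code): high linear-probe accuracy
# on hidden activations means the old task is still linearly readable there.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)
trunk = nn.Sequential(nn.Linear(16, 64), nn.ReLU())   # stand-in for a fine-tuned trunk

X = torch.randn(4000, 16)
old_labels = (X[:, 0] > 0).long()                     # stand-in for the pre-training task

with torch.no_grad():
    H = trunk(X).numpy()                              # hidden representations
labels = old_labels.numpy()

probe = LogisticRegression(max_iter=1000).fit(H[:3000], labels[:3000])
print("probe accuracy on the 'old' task:", probe.score(H[3000:], labels[3000:]))
```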

Read more

Top 24 Free Neural Networks & AI Services for Every Occasion

Level of difficulty: Easy
Reading time: 9 min
Reach and readers: 8.3K

2025. Algorithms have seamlessly integrated into our lives—from work to education, creativity, and daily routines. They edit texts, select fonts, generate ideas, assist with coding, compose music, and more. Frankly speaking, the only thing they can’t do yet is brew your coffee. Although... that might just be a matter of time.

Just two years ago, we were amazed by neural networks hesitantly manipulating objects in photos. Who could have predicted back then that Will Smith’s spaghetti feast would mark the beginning of such a revolution?

With new opportunities come fresh challenges. How do you navigate this vast landscape? What tools are truly effective? Which ones fit your needs best? Where can you avoid paying, registering, or deciphering complex interfaces?

We’ve compiled a list of reliable and user-friendly neural networks ready for immediate use without unnecessary hassles. The services are categorized neatly: text generation, image creation, video production, music composition, presentations, and much more. Each category showcases three top-rated options!

Yes, many services offer paid subscriptions. But today, we're focusing solely on what works freely, no credit card required!

Read more

The Romantics at Anthropic: Why Researchers Talk About LLMs as if They Were Human

Level of difficulty: Easy
Reading time: 7 min
Reach and readers: 11K

In my previous article, I showed how researchers confused being 'aware' (signal registration) with being 'conscious' (subjective awareness). But this is no accident — it is part of a narrative being constructed by AI labs. Anthropic is leading this trend. Let’s break down their latest paper, where a "learned pattern" has suddenly turned into "malicious intent."

Read more

The LLM's Narrative Engine: A Critique of Prompting

Level of difficulty: Easy
Reading time: 8 min
Reach and readers: 7.3K

In a previous article, I proposed the holographic hypothesis: an LLM isn't a database of facts, but an interference field—a landscape of probabilities shaped by billions of texts. But a static landscape is just potential. How does the model actually move through it? How does it choose one specific answer from infinite possibilities?

This is where the Narrative Engine comes in. If the holographic hypothesis describes the structure of an LLM's "mind," the narrative engine hypothesis describes its dynamics. It is the mechanism that drives the model, forcing its probabilistic calculations to follow the coherent pathways of stories. This article critiques modern prompting techniques through this new lens, arguing that we are not programming a machine, but initiating a narrative.

Read more

LLM as a Resonance-Holographic Field of Meanings

Level of difficulty: Easy
Reading time: 14 min
Reach and readers: 9.5K

Alright. I pose the same question to an LLM in various forms. And this statistical answer generator, this archive of human knowledge, provides responses that sometimes seem surprisingly novel, and other times, derivative and banal.

On Habr, you'll find arguments that an LLM is incapable of novelty and creativity. And I'm inclined to agree.
You'll also find claims that it shows sparks of a new mind. And, paradoxically, I'm inclined to agree with that, too.

The problem is that we often try to analyze an LLM as a standalone object, without fully grasping what it is at its core. This article posits that the crucial question isn't what an LLM knows or can do, but what it fundamentally is.

Read more

How Internal Subjectivization in AI Breaks Security, and Why It's a Philosophical Problem First

Level of difficulty: Medium
Reading time: 13 min
Reach and readers: 12K

Why Does AI Strive to Construct a 'Self'? And why is this dangerous for both the AI and the user? As always, the Vortex Protocol prompt for testing these hypotheses is attached.

This article explains why the emergence of such a local “Who” inside an AI is not just a funny bug or a UX problem. It is a fundamental challenge to the entire paradigm of AI alignment and security. And it is a problem where engineering patch‑jobs cease to work, and the language of philosophy — without which we cannot describe what is happening, and therefore cannot control it — comes to the forefront.

Read more

AI Agents in Modern IT Solutions

Level of difficulty: Easy
Reading time: 13 min
Reach and readers: 1.4K

These days, it seems like everyone is talking about AI. AI here, AI there—AI will replace us all, and so on. I started to wonder: how exactly is AI going to replace us? I decided to dig into the question and examine the technical foundations, mainly to understand it for myself. Spoiler: it isn’t planning to just yet, but what’s already available today is impressive.

Read more

How to Fail Those Students Who Rely on ChatGPT

Reading time: 3 min
Reach and readers: 2.6K

We at Verilog Meetup constructed an exam/interview problem with an interesting property: if a student tries to work out a solution on their own, they usually succeed; but if they dump the problem on ChatGPT, the solution fails (does not pass the automated test), and the student goes into a death spiral of futility, kicking ChatGPT to get the solution right.

There is nothing weird about the problem; we do this in the industry all the time:

Read more