No-Code News WK 2 2026
Make.com is in the news for the right reasons. Notion and n8n, not so much. AI hardware is becoming the new no‑code platform, and local desktop agents are on the rise. Join me for W
Make.com Named Top AI Automation Platform and Launches New Features
Make.com is celebrating its recognition as the Best AI Automation Platform for 2026 by HackerNoon, a testament to its growing influence in the no-code space, with over 500,000 organizations using its platform. In its December 2025 release, Make introduced several powerful new features, including a native AI Web Search module that allows users to search the internet directly within their workflows without external APIs. Other notable additions include enhanced scenario run replay and history, the ability for LLMs to build their own tools via MCP Server, and a new Figma integration. The platform also added over 40 new app integrations, including AI tools like Suno and MusicGPT, and various data and social media automation apps.
Why:
It is a really good article, and they do a great job of showing the pros and cons of all these tools so you can make a choice, but also see how many options are out there.
Link:
Best AI Automation Platforms for Building Smarter Workflows in 2026
Zapier Expands AI Capabilities with PerceptivePanda Acquisition
Zapier has made a strategic move to bolster its AI-powered automation capabilities by acquiring PerceptivePanda, an AI-native platform specializing in AI-driven customer research interviews. While the PerceptivePanda platform itself is being wound down, its co-founder, Andre Vanier, will join Zapier as the leader of Orchestration Strategy. This acquisition signals Zapier’s commitment to integrating more sophisticated AI functionalities into its platform, aiming to help customers navigate the evolving landscape of workplace automation.
Why:
“Orchestration Strategy” is an interesting term. This idea of orchestrating is going to be a big part of what we do in 2026.
Link:
PerceptivePanda joins Zapier | Zapier
n8n: A Major Leap Forward and a Critical Security Alert
There are a couple more CVE issues, including CVE-2026-21858, a Python remote code execution vulnerability. As always, it is important to read the details and keep your instances patched.
Why:
With no-code, we still have to patch our tools if they are self‑hosted.
Link:
Notion AI Data Exfiltration
Even fully hosted systems need patching. In this case, hidden prompts were used to trick the AI into doing what it was told by the wrong person. The issue has since been patched.
Why:
There is no going back. We have to use AI to get our work done in new and better ways. But there will be issues, so keep an eye on the news and on the tools you are using.
Link:
Notion AI Unpatched Data Exfiltration – PromptArmor
Tailwind News
This one is tricky. All I can really say is that what was sellable a few years ago might not be anymore. So what’s next? Is it theft?
What I do know is that I started my career building and fixing computers. Within 1–2 years, with the rise of USB, Windows 95, and laptops, I was less needed. Technology puts an end to “old” ways and opens up doors to new ones, we hope. This might be that moment for Tailwind.
FlutterFlow’s AI Future Is DreamFlow. Its AI Present Is This.
This is a good look at how things are going right and wrong with DreamFlow and FlutterFlow.
Why:
It is a reminder that tools built with a “Lego brick” system that you used to need skill to assemble can now put AI on top to assemble those bricks for you. That leads to prompts that produce predictable outcomes. But there are still struggles here. As you watch the video, you will see a really good explanation of what those struggles look like for this user.
Link:
FlutterFlow’s AI future is DreamFlow. Its AI present is this.
Trending on GitHub and Hugging Face
MiroThinker
Open-source research agent model.
Why:
Run local models without the cost or privacy concerns of fully hosted solutions. I will be adding this to my series.
Link:
LTX-2: Efficient Joint Audio-Visual Foundation Model
LTX-2 is an open-source audiovisual diffusion model that generates synchronized video and audio content using a dual-stream transformer architecture with cross-modal attention and classifier-free guidance.
Why:
What will it look like in two years when a video creator has their own local rig, training models like this and outputting video? The cost is going down, and the limitations are going down too. It gets easier and cheaper to make exactly the content you want.
Link:
Lightricks/LTX-2 · Hugging Face
Qwen-Image
Qwen-Image is a text‑to‑image foundational model.
Why:
Another model you can run on your local machine. Train it, tune it, and use it at no additional cost. This is especially fun for people who want to generate images to go with ideas for comics, books, and more.
Link:
Qwen/Qwen-Image-2512 · Hugging Face
NemoTron 3 Nano
A good look at how local LLMs are needing less and less while doing more and more.
Link:
Julie Agentic Desktop and TARS
Julie is a lightweight, transparent desktop AI assistant built to reduce context switching. It lives on top of your workspace, understands what is on your screen, and responds via voice or text without forcing you to switch tabs, copy context, or break focus.
Why:
If this works—or better, once this kind of thing really works—what is the next step for apps? If it can do so many things on the computer for us, it can not only gather news and context, but also build software for clients. It will be interesting to see how we progress to “prompting our day.”
And then there is Bytedance’s entry as well. 🤔
Links:
New in 2026: Where Does No-Code Actually Live?
This is a new section for 2026 as I try to make sense of what is happening here. It is getting harder to separate what we do for no-code apps from what we do to bring all our data and context into one place, with AI tools woven through all of it.
What even counts as an “app” now? What are we “no‑coding”? Where will it live? Will it be the web, your desktop, your wearable, or all of the above?
New Plaud AI NotePin S Adds Recording Button, Desktop App for Online Calls
Why:
It is getting easier to carry this around and use it for meetings and more. And then there are the Motorolas…
Augmented AI Computer Vision
This is the kind of hardware we might be building for in a few years.
Link:
Motorola Is Entering the Wearable AI Game
A wearable without the attitude that Humane had? I am excited about these devices mostly for work. I am not sure if I would use them on the road, but they could be huge for ongoing context and memory of work meetings, research, and more.
Link:
Motorola entering wearable AI – CES 2026 coverage
All Links
https://hackernoon.com/best-ai-automation-platforms-for-building-smarter-workflows-in-2026
https://zapier.com/blog/perceptive-panda-joins-zapier/
https://nvd.nist.gov/vuln/detail/CVE-2026-21858
https://www.promptarmor.com/resources/notion-ai-unpatched-data-exfiltration
https://dev.to/sgardoll/flutterflows-ai-future-is-dreamflow-its-ai-present-is-this-2cf1
https://github.com/MiroMindAI/MiroThinker
https://github.com/Luthiraa/julie
https://github.com/bytedance/UI-TARS-desktop
https://mashable.com/article/ces-2026-motorola-entering-wearable-ai
https://huggingface.co/Lightricks/LTX-2