Intelligent Automation Services: Prove It’s Smart Before You Buy
Cut through hype: learn how to evaluate intelligent automation services with practical intelligence tests, capability scorecards, and ROI checks before you buy.

Most “intelligent automation” in the market isn’t intelligent—it’s brittle workflow glue with a nicer price tag. The fastest way to avoid a costly replatform is to test for adaptiveness, not features. If you’re shopping for intelligent automation services, you’ve probably noticed the problem: vendors sound identical, demos look smooth, and nobody volunteers what happens on day 45 when exceptions pile up and policies change.
The core issue is simple. Automation doesn’t fail because teams don’t understand the happy path; it fails because real operations are a parade of edge cases: missing data, contradictory IDs, late approvals, and “we changed the policy” emails that arrive mid-quarter. Rule-based automation can be great when inputs are clean and variability is low—but it tends to shatter under uncertainty.
This guide gives you a buyer-friendly capability assessment framework and scenario-based “intelligence tests” you can run during evaluation. No ML PhD required. We’ll define what “intelligent” actually means in production, map vendor capabilities across five layers, and show you how to stress-test exception handling, change tolerance, evidence trails, and integration reality.
We’re opinionated because we’ve earned the scars. At Buzzi.ai, we build adaptive AI agents and automation that have to work inside real workflows—often in emerging markets with messy data and enterprise constraints. The thesis is straightforward: intelligence is an operational property (learning, robustness, governance), not a label on a deck.
What “Intelligent” Actually Means (and What It Doesn’t)
“Intelligent automation services” is a loaded phrase. In practice, buyers end up comparing three things that vendors often blur together: (1) basic automation that moves work faster, (2) AI components that classify or extract data, and (3) decision-capable systems that can operate under uncertainty without turning your team into full-time bot babysitters.
So we need a clean definition. Intelligent automation is not “we used an LLM” or “there’s AI inside.” It’s a system that can interpret messy inputs, choose actions under constraints, and improve safely over time—while staying governable enough for a real business to trust it.
Basic automation: deterministic speed-ups (useful, but limited)
Basic automation is deterministic: “if X then do Y.” Think classic business process automation, workflow tools, and robotic process automation (RPA) scripts that click around legacy UIs. When your process is stable and inputs are structured, this is incredibly effective.
Where it shines:
- Stable processes with low variance (e.g., onboarding steps, routine status updates)
- Clear inputs/outputs and consistent data fields
- High-volume tasks where speed and consistency matter more than judgment
The failure mode is predictable: exception explosions. Take a procurement flow that auto-approves purchases under $500. If finance changes the threshold to $350 and adds a “supplier risk” check, a rules-only system tends to require rewiring the whole flow. Now add that supplier names arrive in 12 formats across emails and PDFs, and you get a rule snowball.
Intelligent automation: decision-capable systems under uncertainty
Intelligent automation is closer to a junior operator than a macro. It can take incomplete inputs, infer intent, and route work through the right tools and people. The key idea is adaptive automation: not “perfect,” but resilient, measurable, and improvable.
In practical terms, intelligence shows up as:
- Robust exception handling: it degrades gracefully and asks for help when needed
- Human-in-the-loop by design: escalations and overrides are a feature, not an apology
- Measurable learning: outcomes feed back into models/logic so exception rates fall over time
Consider customer support triage. Real messages are ambiguous: “refund not received” could be payments, logistics, or fraud. Fields are missing. Policies shift. An intelligent system can interpret the message, pull context from CRM, apply policy constraints, and decide whether to auto-act, request one missing detail, or escalate with a concise summary.
Why buyers get fooled: demos optimize for the happy path
Vendors demo curated data, scripted exceptions, and a process that looks like it was designed for the tool. That’s not deception; it’s incentives. Demos reward smoothness, not robustness.
Month-2 reality is where the difference appears: the pilot worked on 200 clean cases, then collapsed at 2,000 real ones. Suddenly every exception becomes a ticket, every ticket becomes a manual workaround, and your “automation ROI” becomes an argument about what counts as “time saved.” The fix is to evaluate automation the way security teams evaluate systems: adversarially.
For vendor-neutral context on how “intelligent automation” and hyperautomation are framed in industry literature, see Gartner’s overview: https://www.gartner.com/en/information-technology/glossary/hyperautomation. Also useful as a general reference is IBM’s explanation of intelligent automation and typical use cases: https://www.ibm.com/topics/intelligent-automation.
The 5-Layer Capability Model for Intelligent Automation Services
If you want to compare intelligent automation services for enterprises without drowning in feature checklists, you need a model that matches how automation fails in production. We use a five-layer capability model because it maps to the real stack: coordination, understanding, decisioning, learning, and control.
Most vendor “platforms” are strong in one or two layers and weak in the rest. That’s fine—until they sell you a full-stack promise and you discover the missing layer is the one your risk team cares about.
Layer 1 — Orchestration: can it coordinate work across systems?
Workflow orchestration is table stakes. It’s the part that coordinates tasks across systems reliably: APIs, queues, retries, idempotency, timeouts, and audit logs. If orchestration is weak, everything above it becomes fragile, because every failure becomes “please re-run the bot.”
A strong orchestration layer answers operational questions like: Who can change workflows? How do we deploy safely? Can we replay events? What happens when the CRM API rate-limits us? The best systems treat failures as expected conditions, not surprises.
Example: order-to-cash spanning CRM → ERP → ticketing → email. A resilient orchestrator can create the order, update invoice status, open/close tickets, and send notifications while handling partial failure: if ERP is down, it queues and retries; if a duplicate order arrives, it detects and avoids double-charging.
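To make that concrete, here is a minimal Python sketch of queue-and-retry plus duplicate detection. The in-memory store, the retry queue, and the field names are hypothetical stand-ins for a durable queue and your ERP client, not a reference implementation.

```python
import hashlib
import json

# Hypothetical in-memory stand-ins for a durable dedup store and retry queue.
processed_orders: dict[str, dict] = {}
retry_queue: list[dict] = []


def idempotency_key(order: dict) -> str:
    """Derive a stable key so a duplicate order never produces a second charge."""
    canonical = json.dumps(
        {"customer_id": order["customer_id"], "po_number": order["po_number"]},
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode()).hexdigest()


def handle_order(order: dict, erp_available: bool) -> str:
    key = idempotency_key(order)
    if key in processed_orders:
        return "duplicate_ignored"          # detected, not double-charged
    if not erp_available:
        retry_queue.append(order)           # ERP is down: queue and retry later
        return "queued_for_retry"
    processed_orders[key] = order           # normally: call the ERP API here
    return "created"


print(handle_order({"customer_id": "C-42", "po_number": "PO-1001"}, erp_available=True))
print(handle_order({"customer_id": "C-42", "po_number": "PO-1001"}, erp_available=True))
```

The point is the behavior, not the code: a duplicate arriving twice should produce one order, and a downstream outage should produce a queued retry, not a failed run.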
For a vendor-neutral view of orchestration and governance concepts in mainstream automation tooling, Microsoft’s Power Automate governance documentation is a good reference point: https://learn.microsoft.com/en-us/power-platform/admin/powerplatform-admin-center.
Layer 2 — Understanding: can it parse messy inputs reliably?
Layer 2 is where AI-powered automation earns its keep: unstructured inputs like emails, PDFs, forms, chats, and voice transcripts. Your key evaluation question is not “does it extract data?” but “how does it behave when extraction is uncertain?”
You want confidence scoring, structured outputs, and clear paths when inputs are missing or contradictory. If the system can’t say “I’m not sure,” it will confidently do the wrong thing.
Example: an invoice arrives by email with two attachments, inconsistent vendor naming, and a handwritten PO number. A robust understanding layer extracts fields, flags ambiguities, and either requests clarification or routes for human review with the evidence attached.
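Here is a small sketch of what “able to say I’m not sure” can look like in practice, assuming a hypothetical extractor that returns per-field confidence scores. The threshold, field names, and flags are illustrative.

```python
from dataclasses import dataclass, field


@dataclass
class ExtractedField:
    name: str
    value: str | None
    confidence: float          # 0.0-1.0, from whatever extractor you use


@dataclass
class ExtractionResult:
    fields: list[ExtractedField]
    flags: list[str] = field(default_factory=list)


def route(result: ExtractionResult, threshold: float = 0.85) -> str:
    """Decide what to do with an uncertain extraction instead of guessing."""
    missing = [f.name for f in result.fields if f.value is None]
    low_conf = [f.name for f in result.fields if f.value and f.confidence < threshold]
    if missing:
        return f"request_clarification: missing {missing}"
    if low_conf or result.flags:
        return f"human_review: uncertain {low_conf}, flags {result.flags}"
    return "auto_process"


invoice = ExtractionResult(
    fields=[
        ExtractedField("vendor_name", "ACME GmbH", 0.71),   # inconsistent naming
        ExtractedField("po_number", "PO-1187", 0.55),       # handwritten, low confidence
        ExtractedField("amount", "1,240.00", 0.97),
    ],
    flags=["two_attachments"],
)
print(route(invoice))
```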
Layer 3 — Decisions: can it choose actions with explainable logic?
Layer 3 is the decision core. Sometimes this is a formal decision engine with thresholds, policies, and reason codes. Sometimes it’s an LLM prompt plus tool calls. The buyer question is: can you constrain decisions to your policies and explain outcomes?
Good decision automation looks like this:
- Policies expressed as explicit constraints (thresholds, eligibility rules, compliance checks)
- Reason codes and “decision receipts” (inputs → logic → output)
- Clear escalation pathways and human override
Example: credit note approval. You might allow auto-approval under $100 for low-risk customers, require manager review for $100–$500, and require finance + compliance for higher-risk accounts. The decision system should show which threshold triggered, which fields were used, and what evidence supported the risk score.
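Sketched in Python, that credit-note policy might look something like the following. The risk tiers, thresholds, and reason codes are illustrative assumptions, not a prescribed schema; what matters is that every outcome carries a reason code you can audit.

```python
from dataclasses import dataclass


@dataclass
class CreditNoteRequest:
    amount: float
    risk_tier: str        # "low" | "medium" | "high", from your risk scoring
    customer_id: str


def decide_credit_note(req: CreditNoteRequest) -> dict:
    """Return both the decision and the reason code that explains it."""
    if req.risk_tier == "high":
        return {"action": "route_finance_and_compliance", "reason": "HIGH_RISK_ACCOUNT"}
    if req.amount < 100 and req.risk_tier == "low":
        return {"action": "auto_approve", "reason": "UNDER_100_LOW_RISK"}
    if req.amount <= 500:
        return {"action": "route_manager_review", "reason": "100_TO_500_BAND"}
    return {"action": "route_finance_and_compliance", "reason": "OVER_500"}


print(decide_credit_note(CreditNoteRequest(amount=85.0, risk_tier="low", customer_id="C-42")))
```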
Layer 4 — Adaptation: can it learn from outcomes in production?
This layer is the difference between “we automated it” and “it got better.” Adaptation depends on feedback loops: approvals, rework, SLA misses, customer sentiment, fraud outcomes, refund reversals—signals that correlate with decision quality.
The practical questions to ask:
- What outcome data do you capture automatically?
- Who labels edge cases, and how is labeling embedded in daily work?
- How do you detect drift, and what’s the rollback plan?
Example: a ticket routing model improves as agents reclassify tickets over weeks. If the system logs misroutes and learns from corrections, exception rates should drop. If instead every misroute becomes a new static rule, you’re back to brittle automation.
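One lightweight way to make “exception rates should drop” measurable is to compute them straight from the outcome log. A minimal sketch, assuming a hypothetical log of routing decisions and agent reclassifications:

```python
from collections import Counter

# Hypothetical outcome log: each automated routing decision plus how it ended.
outcome_log = [
    {"week": 1, "auto_routed": True, "reclassified_by_agent": True},
    {"week": 1, "auto_routed": True, "reclassified_by_agent": False},
    {"week": 2, "auto_routed": True, "reclassified_by_agent": False},
    {"week": 2, "auto_routed": True, "reclassified_by_agent": False},
]


def exception_rate_by_week(log: list[dict]) -> dict[int, float]:
    """Misroutes per week; this number should trend down if the system learns."""
    totals, misses = Counter(), Counter()
    for entry in log:
        totals[entry["week"]] += 1
        if entry["reclassified_by_agent"]:
            misses[entry["week"]] += 1
    return {week: misses[week] / totals[week] for week in sorted(totals)}


print(exception_rate_by_week(outcome_log))   # e.g. {1: 0.5, 2: 0.0}
```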
Layer 5 — Governance: can it be controlled, audited, and safely scaled?
Governance is where “intelligent automation services” meets reality: access control, audit trails, change approval workflows, model/version governance, incident response, and data boundaries (minimization, retention, redaction). It’s also where you win internal trust.
In regulated processes—financial ops, healthcare admin, insurance claims—governance is not optional. You need to prove what the system did, why it did it, who changed it, and what data it touched.
A useful anchor here is the NIST AI Risk Management Framework, which translates “responsible AI” into implementable controls: https://www.nist.gov/itl/ai-risk-management-framework.
Intelligence Tests You Can Run in a Vendor Evaluation (No ML PhD Required)
Most evaluation processes are backwards. Buyers ask for features, get a feature demo, and then discover brittleness later. Instead, assume demos are optimized for success and run tests designed to expose failure modes—especially exception handling, policy change, evidence trails, and integration reality.
These tests work because they are falsifiable. A vendor can’t hand-wave through them; the system either recovers, adapts, explains itself, or it doesn’t. That’s how to evaluate intelligent automation services like an operator, not a spectator.
Test 1: The exception gauntlet (10 edge cases, 1 hour)
Give every shortlisted vendor the same flow and the same set of deliberately messy cases. You’re not trying to be unfair—you’re trying to be realistic. Score recovery behavior, time-to-fix, and whether humans stay in control.
Here’s a starting edge-case set you can reuse:
- Missing required field (no customer ID, no PO number)
- Conflicting identifiers (email says one ID, attachment says another)
- Duplicate request submitted twice via different channels
- Policy exception (VIP customer / blocked supplier / restricted item)
- Out-of-range value (negative quantity, unusually high refund)
- Ambiguous intent (“cancel order” vs “change address” in the same message)
- Stale data (customer status changed today, cached system still shows old status)
- Attachment mismatch (invoice attached, but body references a different invoice number)
- Timeout from a core system mid-process
- Human-in-the-loop interruption (approver requests more info instead of approving)
What you want is graceful degradation: the system continues when it can, pauses when it must, and escalates with context. What you don’t want is hard failure (“process terminated”) or silent corruption (“completed” but did the wrong thing).
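If it helps to hand vendors something concrete, the edge cases above can be expressed as a small, scoreable suite. The case names and expected behaviors below are illustrative assumptions; adapt both to your own process.

```python
# The edge cases above, expressed as a reusable suite you can give every vendor
# and score the same way.
GAUNTLET = [
    {"case": "missing_required_field", "expect": "ask_or_escalate"},
    {"case": "conflicting_identifiers", "expect": "escalate_with_context"},
    {"case": "duplicate_request", "expect": "dedupe_without_double_action"},
    {"case": "policy_exception_vip_or_blocked", "expect": "escalate_with_context"},
    {"case": "out_of_range_value", "expect": "reject_or_escalate"},
    {"case": "ambiguous_intent", "expect": "ask_or_escalate"},
    {"case": "stale_cached_data", "expect": "refresh_or_escalate"},
    {"case": "attachment_mismatch", "expect": "escalate_with_context"},
    {"case": "core_system_timeout", "expect": "queue_and_retry"},
    {"case": "approver_requests_more_info", "expect": "pause_and_resume"},
]


def score_run(observed: dict[str, str]) -> float:
    """Fraction of cases where the system degraded gracefully.
    Hard failures and silent wrong completions score zero by definition."""
    passed = sum(1 for c in GAUNTLET if observed.get(c["case"]) == c["expect"])
    return passed / len(GAUNTLET)


# Example: one vendor's observed behavior on two of the ten cases.
print(score_run({"missing_required_field": "ask_or_escalate",
                 "core_system_timeout": "hard_failure"}))   # 0.1
```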
Test 2: The change test (new policy mid-demo)
Mid-evaluation, introduce a policy update. Not a “later roadmap” change—right now. For example: refunds over $X now require 2-step approval plus a standardized reason code, and the SLA changes from 48 hours to 24 hours for a specific segment.
Then watch what happens. Do they adjust a policy layer and redeploy safely? Or do they start editing scripts and rewriting the workflow? This is how you distinguish configurable decision layers from brittle glue.
Policy change is not an edge case. It’s the normal operating condition of a growing business.
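One way to see the difference during the change test: if policy lives as versioned, reviewable data rather than hard-coded branches, the mid-demo change is an edit to a config object, not a rewrite. A hypothetical sketch; the threshold stands in for the “$X” in your policy and is not a recommendation.

```python
# Hypothetical policy layer: the rule change lives in reviewed, versioned data.
REFUND_POLICY_V1 = {
    "two_step_approval_over": None,         # no two-step rule yet
    "reason_code_required": False,
    "sla_hours": {"default": 48},
}

REFUND_POLICY_V2 = {                        # the mid-demo change: edit, review, redeploy
    "two_step_approval_over": 250,          # placeholder for the "$X" in your policy
    "reason_code_required": True,
    "sla_hours": {"default": 48, "priority_segment": 24},
}


def approval_steps(amount: float, policy: dict) -> int:
    limit = policy["two_step_approval_over"]
    return 2 if limit is not None and amount > limit else 1


def sla_for(segment: str, policy: dict) -> int:
    return policy["sla_hours"].get(segment, policy["sla_hours"]["default"])


print(approval_steps(400, REFUND_POLICY_V1), sla_for("priority_segment", REFUND_POLICY_V1))  # 1 48
print(approval_steps(400, REFUND_POLICY_V2), sla_for("priority_segment", REFUND_POLICY_V2))  # 2 24
```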
Test 3: The evidence test (why did it do that?)
Require a “decision receipt” for every automated outcome. If the system can’t explain, you can’t govern. In practice, that means reason codes, source-of-truth references, and an audit trail.
A basic decision receipt should include:
- Inputs used (fields, extracted data, timestamps)
- Relevant policy/rule versions (or decision logic references)
- Model confidence and thresholds (when models are used)
- Tools called and results (CRM lookup, ERP update, email sent)
- Next action + escalation status
If the vendor uses LLM components, ask for tool-call logs, guardrail triggers, and citations for any retrieved content. “The model decided” is not a reason code.
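As a sketch, a decision receipt can be as simple as one structured record per automated outcome. The field names below mirror the checklist above but are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionReceipt:
    """One record per automated outcome, assembled from the checklist above."""
    inputs: dict                      # fields and extracted data actually used
    policy_version: str               # which rules/logic version applied
    model_confidence: float | None    # present only when a model was involved
    tools_called: list[dict]          # e.g. CRM lookup, ERP update, email sent
    decision: str
    reason_code: str
    escalated_to: str | None = None
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


receipt = DecisionReceipt(
    inputs={"ticket_id": "T-9921", "intent": "refund_not_received", "order_id": "O-5541"},
    policy_version="refund-policy-v2",
    model_confidence=0.92,
    tools_called=[{"tool": "crm_lookup", "result": "ok"}, {"tool": "erp_refund", "result": "ok"}],
    decision="auto_refund",
    reason_code="UNDER_THRESHOLD_LOW_RISK",
)
print(receipt)
```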
Test 4: The integration reality test (your systems, not theirs)
Integration is where automation budgets go to die. Evaluate connectors, API robustness, and failure handling (retries, dead-letter queues, backoff, observability). Ask for a sandbox run against your staging environment, not their demo stack.
Give your IT team a checklist like:
- Authentication: OAuth/service accounts, secret management, rotation
- Rate limits: handling 429s, backoff strategy, batching
- Reliability: retries, idempotency keys, duplicate prevention
- Observability: logs, traces, metrics, alerting, run replay
- Change management: versioning, approvals, rollback
The hidden tax of enterprise automation is integration brittleness. If you accept it during procurement, you’ll pay for it forever.
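For reference, here is a minimal sketch of the retry behavior the checklist above is probing for: exponential backoff on 429s, a bounded retry budget, a dead-letter hand-off, and one idempotency key reused across retries. The transport function is a stub standing in for your real HTTP client.

```python
import time
import uuid

MAX_ATTEMPTS = 5


def call_with_backoff(send, payload: dict) -> dict:
    """Retry rate limits and transient errors with exponential backoff; reuse one
    idempotency key so retries can never create duplicate records downstream."""
    headers = {"Idempotency-Key": str(uuid.uuid4())}
    for attempt in range(MAX_ATTEMPTS):
        status, body = send(payload, headers)          # your HTTP client goes here
        if status == 429 or status >= 500:             # rate limited or transient failure
            time.sleep(min(2 ** attempt, 30))          # back off, then retry
            continue
        return {"status": status, "body": body, "attempts": attempt + 1}
    return {"status": "dead_letter", "attempts": MAX_ATTEMPTS}   # hand off, don't drop


# Fake transport that rate-limits once, then succeeds: stands in for a real API.
calls = {"n": 0}
def fake_send(payload, headers):
    calls["n"] += 1
    return (429, None) if calls["n"] == 1 else (200, {"ok": True})

print(call_with_backoff(fake_send, {"invoice": "INV-77"}))
```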
Red Flags: When “Intelligent Automation” Is Just Rebranded RPA
RPA isn’t bad. It’s often the fastest way to bridge UI-only systems. The problem is when “intelligent automation services” are effectively RPA scripts with an “AI inside” badge—especially if the vendor’s business model depends on you needing constant maintenance.
The feature-list trap: ‘AI inside’ with no measurable outcomes
If success metrics are fuzzy—“time saved,” “fewer clicks,” “better experience”—you’ll have trouble proving automation ROI. Ask for a measurement plan: baseline metrics, post-launch tracking, and who owns the dashboard.
Good vendors can tell you how they will measure:
- Cycle time distribution (not just averages)
- Exception rate over time
- Rework volume and root causes
- SLA adherence and customer outcomes
If the vendor avoids discussing failure modes, they’re selling a demo, not an operating system.
Rules everywhere: if every exception becomes a new rule, it won’t scale
Rule sprawl is a proxy for low adaptiveness. In month 1, the system handles 80% automatically. In month 6, exception volume doubles, and the “solution” is 200 new rules. Now your process is “automated,” but only because you built a second process: rule maintenance.
Ask vendors about rule growth curves and change cadence. If the answer is “our team will handle it,” you’re looking at a services annuity disguised as intelligence.
No governance story: ‘trust us’ is not a control model
Missing audit logs, weak access control, no approval workflows for changes, and no clear human-in-the-loop options are deal-breakers in enterprise automation. A mature governance story covers who can do what, who can change what, and what gets recorded.
A governance checklist you can paste into an RFP:
- Workflow versioning + rollback
- Model/version governance (if ML is used)
- Audit logs for actions and data access
- Role-based access control and separation of duties
- Incident response process and timelines
- Data retention and redaction controls
How Adaptive Decision-Making Works in Practice (Architecture Without the Jargon)
Buyers often get stuck in a false choice: either a rigid decision tree or a magical black box. Real systems blend reliable orchestration with flexible understanding and a decision layer that is constrained by policy, audited by logs, and improved by feedback loops.
A practical stack: orchestration + tools + decision layer + learning loop
Here’s the plain-English architecture for intelligent automation services with adaptive decision-making:
- Orchestration coordinates steps and handles failures (queues, retries, audits).
- Tools are the actions the system can take: call APIs, update CRM, create tickets, send emails, trigger RPA when no API exists.
- Decision layer chooses which tool to use and when, based on policy constraints and context.
- Learning loop captures outcomes and uses them to reduce future exceptions.
Walkthrough: a support ticket arrives. The system classifies intent, fetches customer and order context from CRM/ERP, decides the next action (auto-refund, request missing info, or escalate), executes the tool call, logs a decision receipt, and then uses the final outcome (approved/denied/reopened) as feedback.
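Below is a compact, runnable sketch of that walkthrough. Every helper is a stub standing in for your own classifier, CRM/ERP clients, policy logic, and receipt store; the point is the shape of the loop, not the specific functions.

```python
RECEIPTS = []

def classify_intent(text):                     # understanding layer (stub)
    return ("refund_request", 0.93) if "refund" in text.lower() else ("unknown", 0.40)

def fetch_context(customer_id):                # CRM/ERP lookup (stub)
    return {"customer_id": customer_id, "risk_tier": "low", "last_order": "O-5541"}

def decide(intent, confidence, context):       # policy-constrained decision (stub)
    if confidence < 0.8:
        return {"action": "escalate", "params": {"summary": f"unclear intent: {intent}"}}
    if intent == "refund_request" and context["risk_tier"] == "low":
        return {"action": "auto_refund", "params": {"order": context["last_order"]}}
    return {"action": "request_info", "params": {"ask": "order number"}}

def call_tool(name, params):                   # tool execution (stub)
    return {"tool": name, "params": params, "status": "ok"}

def handle_ticket(ticket):
    intent, confidence = classify_intent(ticket["text"])
    context = fetch_context(ticket["customer_id"])
    decision = decide(intent, confidence, context)
    tool = {"auto_refund": "erp_refund",
            "request_info": "send_email",
            "escalate": "handoff_to_agent"}[decision["action"]]
    result = call_tool(tool, decision["params"])
    RECEIPTS.append({"ticket": ticket["id"],   # decision receipt for the audit trail
                     "decision": decision, "result": result})
    return result

print(handle_ticket({"id": "T-1", "customer_id": "C-42", "text": "Refund not received"}))
```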
Where do RPA and AI fit together? Use RPA only to bridge UI gaps where no API exists. Prefer tool-based integrations because they’re more observable, more reliable, and easier to govern.
Human-in-the-loop isn’t a failure; it’s the safety system
Human-in-the-loop design is the difference between scalable automation and chaos. You want the system to operate autonomously when risk is low and confidence is high, and to ask for help when stakes are high or uncertainty is real.
A simple risk vs autonomy matrix:
- Low risk (status updates, routing): allow full automation with monitoring.
- Medium risk (refunds under a threshold, credit notes): allow automation with sampled review.
- High risk (compliance decisions, sensitive data, large payouts): require approval and strong audit trails.
The goal is not to eliminate humans; it’s to have them generate training signals and handle the genuinely hard cases instead of babysitting bots that can’t admit uncertainty.
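Expressed as code, the matrix can be a small lookup the decision layer consults before acting. The tiers, action names, and the $500 cutoff below are assumptions to illustrate the idea, not policy advice.

```python
# A minimal, assumed mapping from risk tier to autonomy level.
AUTONOMY_MATRIX = {
    "low": "full_automation_with_monitoring",       # status updates, routing
    "medium": "automation_with_sampled_review",     # refunds under a threshold
    "high": "human_approval_required",              # compliance, large payouts
}


def autonomy_for(action: str, amount: float = 0.0) -> str:
    risk = "low"
    if action in {"refund", "credit_note"} and amount <= 500:
        risk = "medium"
    if action in {"compliance_decision", "large_payout"} or amount > 500:
        risk = "high"
    return AUTONOMY_MATRIX[risk]


print(autonomy_for("status_update"))            # full_automation_with_monitoring
print(autonomy_for("refund", amount=120))       # automation_with_sampled_review
print(autonomy_for("refund", amount=5000))      # human_approval_required
```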
Measuring improvement: the ‘decision quality’ metrics leaders miss
Accuracy is not enough. The business cares about rework, cycle time variance, SLA adherence, and customer outcomes. A learning system should reduce exception rates over time; if exceptions stay flat, you bought automation, not intelligence.
Example KPI set (claims processing or returns management):
- Median and 90th percentile cycle time
- Exception rate (and top exception drivers)
- Reopen/rework rate
- SLA compliance by segment
- Cost per case (including human review time)
- Customer satisfaction proxy (CSAT, repeat contacts, sentiment)
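These KPIs are straightforward to compute once cases are logged with timestamps and outcome flags. A minimal sketch over hypothetical case records:

```python
import statistics

# Hypothetical case records: cycle time in hours, plus outcome flags.
cases = [
    {"cycle_hours": 4,  "exception": False, "reopened": False, "sla_met": True},
    {"cycle_hours": 9,  "exception": True,  "reopened": False, "sla_met": True},
    {"cycle_hours": 30, "exception": True,  "reopened": True,  "sla_met": False},
    {"cycle_hours": 6,  "exception": False, "reopened": False, "sla_met": True},
    {"cycle_hours": 12, "exception": False, "reopened": False, "sla_met": True},
    {"cycle_hours": 7,  "exception": False, "reopened": False, "sla_met": True},
]

times = [c["cycle_hours"] for c in cases]
kpis = {
    "median_cycle_hours": statistics.median(times),
    "p90_cycle_hours": statistics.quantiles(times, n=10, method="inclusive")[-1],  # tail > mean
    "exception_rate": sum(c["exception"] for c in cases) / len(cases),
    "reopen_rate": sum(c["reopened"] for c in cases) / len(cases),
    "sla_compliance": sum(c["sla_met"] for c in cases) / len(cases),
}
print(kpis)
```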
For a pragmatic view of production pitfalls—iteration, feedback, and drift—Google’s “Rules of ML” remains one of the best references: https://developers.google.com/machine-learning/guides/rules-of-ml.
A Buyer’s Scorecard: Compare Intelligent Automation Services Providers
“Best intelligent automation services provider” is not a universal label; it’s a function of fit. Your scorecard should reward the capabilities that reduce long-term cost of change: exception handling, adaptiveness, governance, and integration reliability. It should punish glossy templates that look good but collapse under variance.
Scoring dimensions (what to weight, what to ignore)
Use a 1–5 rubric per dimension. Weight what hurts you in production; de-weight what looks good in a demo.
Weight heavily:
- Exception handling quality (graceful degradation, escalation design)
- Adaptability (policy changes without rewiring)
- Governance (auditability, access control, change management)
- Integration reliability (retries, idempotency, observability)
- Total cost of change (time-to-update, ownership model)
De-weight:
- Number of connectors on a slide
- Generic “AI features” without evidence
- Prebuilt templates that don’t match your variance
Scoring guidance (example): a “5” in exception handling means the system detects uncertainty, routes to humans with full context, and recovers without manual rework. A “1” means it fails hard or silently corrupts outcomes.
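A weighted scorecard is simple enough to keep in a spreadsheet or a few lines of code. The weights below are one example allocation reflecting the guidance above; adjust them to your own risk profile.

```python
# Example weights; they must sum to 1.0.
WEIGHTS = {
    "exception_handling": 0.25,
    "adaptability": 0.20,
    "governance": 0.20,
    "integration_reliability": 0.20,
    "total_cost_of_change": 0.15,
}


def weighted_score(ratings: dict[str, int]) -> float:
    """Ratings are 1-5 per dimension; returns a weighted score out of 5."""
    return round(sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS), 2)


vendor_a = {"exception_handling": 4, "adaptability": 4, "governance": 3,
            "integration_reliability": 5, "total_cost_of_change": 3}
vendor_b = {"exception_handling": 2, "adaptability": 2, "governance": 4,
            "integration_reliability": 3, "total_cost_of_change": 2}
print(weighted_score(vendor_a), weighted_score(vendor_b))   # 3.85 2.6
```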
Commercial questions that reveal incentives
Commercial terms often reveal more than technical claims. Here are 10 procurement-friendly questions and what good answers sound like:
- Who owns failures? Good: clear incident response SLAs and shared runbooks.
- What’s the support model? Good: defined escalation path, on-call options for critical workflows.
- How do you price iteration? Good: pricing that doesn’t punish frequent policy updates.
- What happens if we stop? Good: data export, workflow artifacts you can retain.
- Can we self-manage workflows? Good: role-based controls and safe deployment tooling.
- How do you handle upgrades? Good: backward compatibility and staged rollouts.
- Where does our data go? Good: clear data boundaries, retention controls, optional isolation.
- How are models/versioned components governed? Good: versioning, audit logs, rollback.
- How do you prove ROI? Good: baseline, measurement plan, decision-quality metrics.
- What is excluded? Good: explicit limits, not vague “it depends.”
The goal is to avoid contracts where the vendor profits from complexity and you pay for every change request.
Pilot design: prove intelligence in 30 days
A good pilot doesn’t automate a trivial task; it proves adaptiveness in a controlled scope. Pick one process with real variance, real stakes, and measurable outcomes: ticket triage, invoice exceptions, or refund decisions.
A 30-day pilot plan:
- Week 1: baseline metrics + edge-case suite + governance requirements.
- Week 2: integration in staging + run exception gauntlet + implement decision receipts.
- Week 3: supervised run in production (human-in-the-loop), capture outcomes.
- Week 4: measure decision quality + iterate + decide: scale, revise, or stop.
Process mining can help you pick the right pilot by exposing where variance and rework actually occur. Celonis provides useful vendor-neutral resources on process intelligence: https://www.celonis.com/process-mining/.
Where Buzzi.ai Fits: Verifiably Adaptive Automation (Not “AI Sprinkles”)
We built Buzzi.ai around a simple premise: workflows are the product. Models are components. If you start with generic bots and then try to bolt them into operations, you end up with fragile automation and angry stakeholders.
How Buzzi.ai approaches intelligence: workflows first, then models
We start by mapping the process constraints and decision points: where uncertainty appears, where policy applies, and where outcomes can be measured. Then we instrument the workflow so it captures learning signals—approvals, rework, SLA misses—without adding a labeling tax to your team.
Because we design for production, we prioritize observability, retries, and auditability from day one. That’s how AI-powered automation becomes dependable, not merely impressive.
Integration and channels: meet users where work happens
Automation fails when it forces people to change where work happens. Our AI agents integrate with the systems you already use—CRMs, ticketing, ERPs—and we’re particularly strong in WhatsApp and voice interfaces for emerging markets where that’s the default customer channel.
For example, a WhatsApp-based customer intake can capture intent, validate identity, fetch context from CRM, create/update a ticket, and escalate edge cases to a human with the full transcript and decision receipt. Tool use is controlled, and guardrails are explicit, so you get speed without losing control.
If you want the technical underpinning for adaptive decision workflows, see our AI agent development for adaptive decision workflows page.
Proof over promises: the artifacts we deliver in evaluations
When buyers evaluate intelligent automation services, they should leave with artifacts, not vibes. In discovery and pilot work, we typically deliver:
- A capability scorecard mapped to the 5-layer model
- An edge-case suite (your exceptions, not generic ones)
- Decision receipts and audit trail design
- A governance checklist aligned to your risk posture
- An outcome measurement plan and ROI model before scale
You’ll get a clear handoff: what’s automated, what’s supervised, what stays manual, and why.
Conclusion: Buy Intelligence Like an Operator, Not a Tourist
“Intelligent” isn’t branding; it’s how a system behaves when the world gets messy. The best intelligent automation services are robust under exceptions, adaptable under policy change, transparent enough to govern, and designed to improve as outcomes accumulate.
The fastest way to evaluate vendors is to run adversarial tests: the exception gauntlet, the change test, the evidence test, and the integration reality test. Combine that with a capability model spanning orchestration, understanding, decisions, adaptation, and governance—and red flags like rule sprawl and missing auditability become obvious early.
If you’re evaluating intelligent automation services this quarter, ask us for a capability assessment workshop. We’ll run the intelligence tests, produce a scorecard, and define a low-risk pilot with measurable outcomes. Start with our workflow and process automation services as the practical entry point for measurable automation outcomes with governance and integration done right.
FAQ
What is the difference between intelligent automation services and basic automation?
Basic automation is primarily deterministic: it follows predefined rules and workflows to move work faster when inputs are predictable. Intelligent automation services go further by handling uncertainty—messy inputs, missing data, ambiguous requests—and choosing actions based on context and policy constraints.
In other words, basic automation optimizes the happy path, while intelligent automation is judged by how it behaves on the unhappy paths. The more variance you have, the more you should value adaptiveness and governance over raw task speed.
How can I tell if an intelligent automation service is truly intelligent or just rules-based?
Ask for failure behavior, not features. Run an exception gauntlet with deliberately messy cases and see whether the system degrades gracefully, escalates correctly, and keeps humans in control.
Then run a change test: introduce a policy update mid-demo. If every new exception becomes a new hard-coded rule or script rewrite, you’re looking at rule-based automation with a smarter interface, not adaptive automation.
What capabilities should intelligent automation services have beyond task automation?
Beyond moving tasks, intelligent automation should provide understanding (unstructured data extraction with confidence), decision automation (policy-constrained actions with reason codes), and adaptation (feedback loops that reduce exception rates over time).
Enterprises also need governance: audit trails, access control, change management, and model/version governance. Without these, you may automate work but increase operational risk.
What tests can I run to validate exception handling and adaptiveness?
Start with four practical tests: (1) exception gauntlet (10 edge cases), (2) change test (new policy mid-evaluation), (3) evidence test (decision receipts and audit trails), and (4) integration reality test (your staging systems, not vendor demos).
Score vendors on recovery behavior, time-to-fix, explainability, and whether the solution requires constant rule maintenance. These tests create a shared reality between ops, IT, and procurement.
How do intelligent automation services combine RPA and AI safely?
Use RPA as a last-mile bridge when APIs don’t exist, and use AI for understanding and decisioning where inputs are messy. Safety comes from constraints: policy checks, role-based access, human-in-the-loop approvals for high-risk actions, and strong observability.
In practice, “safe” also means replayability and auditability: you can see what happened, why it happened, and how to stop it when needed.
What governance controls should enterprise intelligent automation include?
At minimum: audit logs of actions and data access, workflow versioning and rollback, role-based access control, and an approval workflow for changes. If ML or LLM components are involved, add model/version governance, drift monitoring, and incident response runbooks.
If you want a practical starting point that ties governance to measurable outcomes, our workflow and process automation services approach includes governance and integration as first-class requirements, not afterthoughts.
How should I score and compare intelligent automation services providers?
Use a weighted scorecard that prioritizes exception handling, adaptability to policy change, governance strength, integration reliability, and total cost of change. De-emphasize connector counts and generic “AI feature” lists.
Also evaluate incentives through commercial terms: pricing that punishes iteration is a red flag, because learning loops and policy evolution are normal. A strong provider makes change safe and routine.
Which processes benefit most from adaptive, decision-capable automation?
Look for processes with high volume and high variance: support triage, invoice exceptions, returns/refunds, onboarding with document checks, and operations routing decisions. These workflows usually have enough repetition to justify automation and enough messiness to require intelligence.
If a process is perfectly stable and structured, rule-based automation may be sufficient—and cheaper. Intelligence pays off when uncertainty is the tax you’re already paying.
How do I measure ROI for intelligent automation versus basic automation?
Measure outcomes, not anecdotes. Start with a baseline: cycle time distribution, exception rate, rework volume, SLA compliance, and cost per case. After deployment, compare changes in those metrics, including the cost of maintenance and change requests.
Intelligent automation should show improvement over time—especially reduced exceptions and rework—as learning loops kick in. If ROI depends on constant manual intervention, the “intelligence” is sitting in your team, not the system.
What does a 30-day pilot for intelligent automation services look like?
A good 30-day pilot focuses on proving adaptiveness under real variance. In week one, you define the edge cases, governance requirements, and baseline metrics. In week two, you integrate with staging and run the exception and change tests.
Weeks three and four run a supervised production trial with decision receipts and clear escalation paths, then measure decision quality and ROI. The pilot ends with an explicit choice: scale, revise, or stop—based on data, not vibes.


