The Daeda AI Automation Suite keeps a local read database, stored as DuckDB on your machine, for each connected HubSpot portal (via the @daeda/mcp-pro client).
The local mirror is not one single data source. It is built from two different systems, and the distinction is very important:

- Artifact-backed data: usually handled automatically for normal reads.
- Lightweight plugin-backed data: not automatically current; refresh it yourself when current values matter.
Each connected portal gets its own local folder, containing:

- the main writable DuckDB database
- a published read replica for safe reads
- local artifact and plugin sync state
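The per-portal layout can be sketched as follows. The file names here are illustrative assumptions, not the suite's real on-disk names; only the three roles come from the description above.

```python
from pathlib import Path

# Hypothetical per-portal layout; file names are assumptions.
def portal_paths(root: str, portal_id: str) -> dict:
    base = Path(root) / portal_id
    return {
        "writable_db": base / "portal.duckdb",           # main writable DuckDB database
        "read_replica": base / "portal.replica.duckdb",  # published replica for safe reads
        "sync_state": base / "sync-state.json",          # artifact and plugin sync state
    }
```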
Artifact-backed data is the main CRM mirror. It comes from full exports, stored snapshots, and automated follow-up sync flows. Artifact-backed tables include contacts, companies, deals, tickets, custom object tables, and other enabled object tables, plus association_schema and workflows.
The artifact flow works like this:

1. The server decides which artifacts exist for the portal.
2. The client requests the latest artifact inventory.
3. The client downloads missing or changed artifacts.
4. The client loads those artifacts into the local DuckDB database.
5. The client runs catch-up diff logic and selected-portal watch behaviour.
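The steps above reduce to a version-comparison loop. This is a sketch; every helper passed in (fetch_inventory, download, load_into_duckdb) is a hypothetical stand-in, not a real @daeda/mcp-pro API.

```python
def sync_artifacts(fetch_inventory, download, load_into_duckdb, local_versions):
    """One pass of the artifact flow: download artifacts that are missing
    locally or whose server version changed, load them into DuckDB, and
    record the new version."""
    for artifact in fetch_inventory():              # server-decided inventory
        name, version = artifact["name"], artifact["version"]
        if local_versions.get(name) != version:     # missing or changed
            load_into_duckdb(name, download(artifact))
            local_versions[name] = version
    return local_versions
```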
Plugins are small metadata sync jobs. They populate metadata tables that do not ride on the main artifact flow. These tables are useful, but they are not treated as live real-time data. Plugin-backed tables include:

- property_definitions, property_groups, property_definition_object_state
- pipelines, pipeline_stages
- workflow_enrollment_triggers, workflow_event_types (from the workflow-enrollment-triggers plugin)
- sequences, sequence_steps
- communication_subscription_definitions (from the communication-subscriptions plugin)
- conversation_inboxes, conversation_channels, conversation_channel_accounts
- provisioning_users, provisioning_teams, provisioning_roles
Some of these plugins exist in the codebase but are currently disabled and not part of the normal active sync surface.
The Automation Suite client does not keep lightweight plugin tables continuously current, and it can auto-skip plugin sync when the data looks fresh enough; the built-in freshness window is 24 hours. That means plugin tables can be correct enough for many tasks, yet still not current enough for operational work.
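The auto-skip decision reduces to a simple age check. This is a sketch of the 24-hour window described above, not the client's actual code.

```python
import time

FRESHNESS_WINDOW_SECONDS = 24 * 60 * 60  # the built-in 24-hour window

def should_skip_plugin_sync(last_refreshed_at, now=None):
    """Return True when the plugin data still looks fresh enough,
    i.e. its age is inside the freshness window."""
    now = time.time() if now is None else now
    return (now - last_refreshed_at) < FRESHNESS_WINDOW_SECONDS
```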
Where freshness comes from:

- CRM records and associations: the artifact-backed local mirror.
- Workflows: the artifact-backed workflows table (snapshot plus selected-portal workflow deltas).
- Lists, owners, pipelines, forms, inboxes, sequences, and similar metadata: plugin freshness in status(section="schema").
The selected portal gets the strongest freshness behaviour:

- CRM records: catch-up diff plus live watch behaviour.
- Associations: updated through the same diff/watch path.
- Workflows: snapshot artifact plus selected-portal workflow delta messages.
- Plugin-backed metadata: still needs manual refresh when current values matter.
Non-selected portals do not get the same live behaviour as the selected portal, although reads can still use the local mirror. For enabled portals, the client can wait for artifact queue work to drain before reading; that helps artifact freshness, but it does not make plugin metadata current. If portal sync is disabled, the Automation Suite serves only the stale local data it already has for that portal. If no local database exists yet, reads fail.
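The fallback behaviour for a portal can be summarised in one small function; the return labels are mine, only the branching logic comes from the text above.

```python
def portal_read_mode(sync_enabled: bool, local_db_exists: bool) -> str:
    """Summarise per-portal read behaviour (labels are illustrative)."""
    if not local_db_exists:
        return "fail"          # no local database yet: reads fail
    if not sync_enabled:
        return "stale-only"    # sync disabled: only stale local data
    return "local-mirror"      # normal case: read the local mirror
```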
Use this model:

- Is the table a CRM object table, associations, association_schema, or workflows? Treat it as artifact-backed.
- Is the table owned by a named lightweight plugin such as lists or owners? Treat it as plugin-backed and refresh it manually when needed.
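A minimal sketch of that decision model. The table names come from this document; treating everything else as plugin-backed is a simplification for illustration.

```python
ARTIFACT_TABLES = {
    # CRM object tables (custom object tables follow the same rule)
    "contacts", "companies", "deals", "tickets",
    # other artifact-backed tables named in this document
    "associations", "association_schema", "workflows",
}

def table_source(table: str) -> str:
    """Classify a table as artifact-backed or plugin-backed."""
    return "artifact-backed" if table in ARTIFACT_TABLES else "plugin-backed"
```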
Always use status(section="schema"); it is the authoritative freshness view. The key fields are:

- lightweightPlugins.<plugin>.status: plugin state such as NOT_STARTED, SYNCED, or FAILED.
- lightweightPlugins.<plugin>.lastRefreshedAt: replica-backed last refresh time.
- lightweightPlugins.<plugin>.ageSeconds: seconds since that refresh.

The same view also reports active and recent plugin refresh jobs.
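Reading those fields out of a status(section="schema") result might look like this; the exact payload shape is an assumption based on the field paths above.

```python
def plugin_freshness(status_payload, plugin):
    """Extract the per-plugin freshness fields described above from an
    assumed status(section="schema") payload shape."""
    entry = status_payload["lightweightPlugins"][plugin]
    return {
        "status": entry["status"],                   # NOT_STARTED / SYNCED / FAILED
        "lastRefreshedAt": entry["lastRefreshedAt"], # replica-backed refresh time
        "ageSeconds": entry["ageSeconds"],
    }
```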
Run refresh_plugins before any task that depends on current plugin-backed metadata, for example:

- workflow-enrollment-triggers
- supported workflow action metadata
- communication subscription definitions (the communication-subscriptions plugin)
- provisioning users, teams, and roles
If a task needs several of these, refresh them together in one call.
The refresh workflow:

1. Call status(section="schema").
2. Decide which plugin-backed metadata you need.
3. Call refresh_plugins(pluginNames=[...]).
4. Poll status(section="schema") again.
5. Wait until the matching refresh job is COMPLETED.
6. Confirm the plugin lastRefreshedAt values have advanced.
7. Only then run query, chart, or plan-building work that depends on those tables.
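The polling part of that workflow can be sketched as below. `status` and `refresh_plugins` are stand-ins for the real tool calls; this sketch gates on the plugin status and an advanced lastRefreshedAt, and a fuller version would also watch the matching refresh job for COMPLETED.

```python
import time

def refresh_and_wait(status, refresh_plugins, plugin_names,
                     timeout=300.0, poll=2.0):
    """Refresh the named plugins, then poll until each one reports
    SYNCED with a lastRefreshedAt that has advanced past its value
    from before the refresh."""
    before = {p: status(section="schema")["lightweightPlugins"][p]["lastRefreshedAt"]
              for p in plugin_names}
    refresh_plugins(pluginNames=plugin_names)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        plugins = status(section="schema")["lightweightPlugins"]
        if all(plugins[p]["status"] == "SYNCED"
               and plugins[p]["lastRefreshedAt"] != before[p]
               for p in plugin_names):
            return plugins
        time.sleep(poll)
    raise TimeoutError(f"plugins not replica-verified within {timeout}s: {plugin_names}")
```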
COMPLETED does not just mean the remote work finished.
It means the writable client has also published and verified the updated read replica.
That is why status(section="schema") is the real gate.
A refresh job moves through these stages:

- Job created but not started.
- Refresh logic is executing.
- The local read replica is being published and verified.
- Refresh finished and the replica is verified (COMPLETED).
- Refresh failed; use the error message.
Examples:

- Which deals are in closedwon? Which contacts belong to which companies? CRM object data plus associations: artifact-backed.
- How many workflows are enabled? The artifact-backed workflows table; workflows is not a plugin table.
- Which owners exist today? owners is plugin-backed.
- What are the current deal stage IDs? pipelines is plugin-backed.
- What is the latest list ID for a static list? lists is plugin-backed.
- What forms exist right now? forms metadata is plugin-backed.
- Questions about property schema? property-definitions is a plugin; refresh it manually when schema freshness matters.
Read-only MCP clients can still serve reads: they use the published local replica. Plugin refresh can be relayed: a read-only client can request a refresh through the writable master client.
Use this rule before every serious query:

- CRM records only: status(section="schema"), then query.
- CRM records plus plugin metadata: status(section="schema"), then refresh_plugins, then wait for replica-verified completion, then query.
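That rule can be wrapped in a small helper. All three callables are stand-ins for the real tool calls, and the replica-verified wait is reduced to a comment here.

```python
def safe_query(status, refresh_plugins, run_query, sql, plugin_deps=()):
    """Apply the rule above: check status first, refresh plugin-backed
    dependencies when the query has any, then run the query. A real
    implementation would poll status until replica-verified completion
    between refresh and query."""
    status(section="schema")                   # always the first gate
    if plugin_deps:
        refresh_plugins(pluginNames=list(plugin_deps))
        # ...wait here for replica-verified completion...
    return run_query(sql)
```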