<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Climakers Blog (English)</title>
        <link>https://blog.climakers.com/en/blog</link>
        <description>Operator-grade guides for Confluence migration, docs-as-code, continuity, and AI-ready Markdown workflows.</description>
        <lastBuildDate>Sun, 03 May 2026 09:00:00 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>SvelteKit + mdsvex</generator>
        <language>en</language>
        <item>
            <title><![CDATA[--sync vs --incremental: Understanding the Differences and When to Use Each]]></title>
            <link>https://blog.climakers.com/en/blog/sync-vs-incremental-understanding-the-differences-and-when-to-use-each</link>
            <guid isPermaLink="false">https://blog.climakers.com/en/blog/sync-vs-incremental-understanding-the-differences-and-when-to-use-each</guid>
            <pubDate>Sun, 03 May 2026 09:00:00 GMT</pubDate>
            <description><![CDATA[Understand how acs2md uses .convert-sync-state.json, why incremental is the default, and when deleted files should stay or go — including the ISO 27001, NIS 2, and SOC 2 retention questions hiding inside the choice.]]></description>
            <content:encoded><![CDATA[If you only remember one thing about --sync and --incremental, make it this: both modes use the same state file and the same changed-page detection. The real difference is what happens when a page disappears from Confluence. That sounds small, but it changes the meaning of the whole output directory.

## The short version

The official docs summarize the difference cleanly:

| Behavior                  | --sync                          | --incremental                        |
| ------------------------- | ------------------------------- | ------------------------------------ |
| State file                | .convert-sync-state.json        | .convert-sync-state.json             |
| Skips unchanged pages     | Yes                             | Yes                                  |
| Re-converts changed pages | Yes                             | Yes                                  |
| If source page is deleted | Local file is deleted           | Local file is kept                   |
| Summary label             | Deleted                         | Removed from tracking                |
| Default                   | No                              | Yes                                  |
| Best fit                  | Live mirrors, continuity copies | Archives, retention, growing corpora |

The docs also note that --incremental is the observed default on the documented release. You do not need to pass it explicitly unless you want the behavior to be obvious in scripts and runbooks.

## How the state file works

When you run space convert with either mode, acs2md:

1. reads .convert-sync-state.json from the output directory
2. fetches metadata for the pages in the space
3. compares page versions against what the state file recorded earlier
4. skips unchanged pages and re-converts changed ones
5. rewrites internal links if link rewriting is enabled
6. updates the state file with the new state

That is why later runs can be dramatically cheaper than the first one.
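The version comparison at the heart of that loop can be sketched in plain shell. Everything here is illustrative: the real schema of .convert-sync-state.json is not documented in this post, so the JSON shape, the page ID, and the extraction logic are assumptions, not the tool's actual implementation.

```shell
# Illustrative sketch of the per-page skip-unchanged decision.
# ASSUMPTION: the state file maps page IDs to the last-converted version;
# the real .convert-sync-state.json schema may differ.
STATE=.convert-sync-state.json
printf '{"123456": {"version": 4}}\n' > "$STATE"   # what an earlier run recorded

remote_version=5   # version the Confluence API reports for page 123456 now
recorded=$(sed -n 's/.*"version": \([0-9][0-9]*\).*/\1/p' "$STATE")

if [ "$remote_version" -gt "$recorded" ]; then
  echo "re-convert page 123456"   # page changed upstream since the last run
else
  echo "skip page 123456"         # unchanged, so the export stays cheap
fi
```

The point of the sketch is only that the decision is version arithmetic against a recorded baseline, which is why a stable output directory matters: lose the state file and every page looks new again.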
## When sync is the right decision

Use --sync when the local directory should behave like a live mirror. This is the right choice when:

- a continuity copy should reflect the current state of the source space
- a downstream system expects stale local files to disappear
- a static site publish step should not keep pages that no longer exist upstream
- disaster recovery procedures depend on an accurate current mirror

If a page is deleted in Confluence, --sync deletes the corresponding Markdown file and removes it from the state file. The summary will report Deleted: N.

## When incremental is the safer decision

Use --incremental when preservation matters more than exact mirroring. This is the better fit for:

- archive-first exports
- compliance and audit retention
- historical documentation sets that must survive upstream deletions
- RAG or search corpora that should never lose already exported content by accident

If a page is deleted in Confluence, --incremental keeps the local Markdown file on disk and only removes the tracking entry. The summary reports Removed from tracking: N.

## A useful decision rule

Ask one question before choosing a mode: should the local directory represent the current live state of the space, or should it preserve documentation even after the source changes?

- Current live state: choose --sync
- Preservation after deletion: choose --incremental

That is the whole decision. Do not make it harder than it is.

## Common mistakes to avoid

The docs call out several details worth making explicit:

- --sync and --incremental are mutually exclusive
- --conflict-resolution=versioned disables sync and incremental behavior, because the timestamped directory is not a stable place for the state file
- changing output directories between runs resets the practical value of state tracking
- link rewriting runs after successful conversion when it is enabled

In other words, state-tracked exports want a stable directory and a clear lifecycle.
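As command lines, the two choices above might look like the following. The TEAMDOCS space key and the ./docs output directory are examples, and while the flag names match the ones discussed in this post, you should confirm the exact syntax against your installed release before scripting it.

```shell
# Live mirror: deletions in Confluence propagate to disk.
# ASSUMPTION: space key and paths are placeholders.
acs2md space convert TEAMDOCS --output-dir ./docs --sync

# Archive: upstream deletions only drop the tracking entry; files stay.
# --incremental is the observed default, but being explicit keeps runbooks clear.
acs2md space convert TEAMDOCS --output-dir ./docs --incremental
```

Running each variant against its own dedicated output directory keeps the two contracts from contaminating each other's state files.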
## Example choices by use case

| Use case                                    | Better mode   |
| ------------------------------------------- | ------------- |
| Keep a Git-backed docs mirror current       | --sync        |
| Maintain a continuity copy for recovery     | --sync        |
| Build an archive that never loses documents | --incremental |
| Feed a growing RAG corpus                   | --incremental |
| Publish a scheduled static-site source      | --sync        |
| Retain audit evidence after source deletion | --incremental |

That mapping is consistent across the customer docs and the guided workflows.

## The compliance question hiding inside the mode choice

--sync and --incremental look like an operational decision. They are also a records-management decision under most modern frameworks.

- ISO/IEC 27001:2022 A.5.33 (protection of records) asks you to define how records are retained, protected, and disposed of. --incremental is the natural fit for records that must survive upstream deletion. --sync is appropriate when the local copy is a live mirror, not a record.
- NIS 2 Article 21 and the related implementing guidance expect essential and important entities to keep evidence of incidents, configurations, and decisions for the duration required by national authorities. If the documentation is part of that evidence, --incremental is usually the correct mode.
- SOC 2 Common Criteria CC7.4 (incident response) assumes you can reconstruct what the system looked like during an incident. Deleting Markdown files because the source page was removed in Confluence can erase exactly the artifact CC7.4 expects you to retain.
- GDPR Article 5(1)(e) (storage limitation) points the other direction. Personal data should not live in a continuity copy longer than the lawful purpose requires. If a Confluence page contained personal data and was deleted for that reason, --sync is the safer mode for that estate.
In other words, the right mode depends on whether the directory is functioning as a mirror (where deletions should propagate) or as a record (where deletions should be preserved as evidence). Different spaces in the same organization may need different modes for the same reason.

## Final recommendation

Choose --sync when local deletion is a feature. Choose --incremental when local retention is a feature.

Both modes give you changed-only refreshes through .convert-sync-state.json. The only question is whether removed Confluence pages should vanish from disk or remain as exported evidence. Decide that once, document it in the runbook, and your later runs become much easier to reason about.]]></content:encoded>
            <category>space-conversion</category>
            <category>acs2md</category>
            <category>sync</category>
            <category>incremental</category>
            <category>continuity</category>
            <category>archive</category>
            <category>retention</category>
            <category>iso-27001</category>
            <category>nis-2</category>
            <category>soc-2</category>
        </item>
        <item>
            <title><![CDATA[Best Practices for Using acs2md in Your Projects]]></title>
            <link>https://blog.climakers.com/en/blog/best-practices-for-using-acs2md-in-your-projects</link>
            <guid isPermaLink="false">https://blog.climakers.com/en/blog/best-practices-for-using-acs2md-in-your-projects</guid>
            <pubDate>Sat, 02 May 2026 09:00:00 GMT</pubDate>
            <description><![CDATA[Use acs2md as an operating workflow, not a one-off command: validate readiness, confirm scope, choose the right output contract, and make debugging routine.]]></description>
            <content:encoded><![CDATA[acs2md works best when teams treat it like an operational system instead of a one-time conversion command. The docs reinforce that repeatedly: validate the machine, discover the target, choose the right mode, and collect useful evidence when something goes wrong. That is what keeps bulk export boring in the best possible way.

## 1. Run doctor any time the environment changes

The utilities docs recommend doctor after license activation, after credential changes, before the first bulk export on a new machine, and before scheduled or customer-facing runs. That is the right baseline. It verifies:

- configuration file validity
- Confluence credentials
- live API connectivity
- license presence and validation
- machine identity
- current version and build metadata

If you skip that step, you move environment mistakes into the most expensive part of the workflow.

## 2. Discover scope before you convert anything

The getting-started and workflow docs both push discovery ahead of conversion. Keep that habit. Why it matters:

- it confirms you are exporting the right space
- it lets stakeholders review the hierarchy before the run
- it reduces accidental exports during migrations or customer work

When governance matters, extend that discovery surface with space properties and space permissions. That pulls metadata and access context into the same export program.

## 3. Decide the output contract before picking flags

Teams often choose flags one by one. A better approach is to decide what the output directory must represent. Examples:

- A live docs mirror: use --sync, keep --rewrite-links, store in a stable directory.
- A long-term archive: keep the default incremental behavior or pass --incremental explicitly.
- A RAG corpus: use --exclude-marks, often disable embedded images, keep scheduled refreshes predictable.
- A migration staging tree: include metadata so identifiers and dates stay visible.

The output contract should drive the flags, not the other way around.

## 4. Keep secrets out of shell history when you can

The configuration docs recommend a clean split:

- keep non-secret defaults in the config file
- inject secrets through environment variables when possible
- use CLI flags for per-run overrides and debugging

That is especially useful in CI/CD. If you need separate environments on one machine, use a custom --config-file instead of hand-editing the same config back and forth.

## 5. Choose a stable output directory for recurring jobs

State tracking depends on a stable directory because .convert-sync-state.json lives next to the exported Markdown. That leads to two practical rules:

- keep the same --output-dir when you want changed-only refreshes
- avoid --conflict-resolution=versioned when you expect sync or incremental behavior to keep working, because a timestamped directory is not a stable home for the state file

This is a subtle point, but it directly affects whether later runs stay efficient.

## 6. Make logging and support bundles routine, not exceptional

Bulk jobs are easier to trust when they leave evidence. The support bundle includes masked configuration, diagnostics, environment context, and recent logs. That is valuable even inside your own team before support ever gets involved.

## 7. Separate workflows by purpose

The workflows page is useful because it stops pretending one export profile fits every job. Use different commands or destinations for different purposes:

- continuity copy: --sync
- archive-first retention: --incremental
- migration planning: space pages --tree, then space convert --include-metadata
- native engineering analysis: space get in the native Atlas Doc Format
- governance review: space list, space properties, space permissions

When the purpose changes, the output contract should change with it.
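Pulled together, the practices above might look like the following CI job script. The environment-variable name and the TEAMDOCS space key are placeholders, not documented identifiers; confirm the exact variable names and flags your release honors before relying on them.

```shell
set -eu  # fail the job on the first error or unset variable

# Secrets arrive from the CI secret store, not the config file or shell
# history. ASSUMPTION: the variable name below is illustrative only.
# export ACS2MD_CONFLUENCE_TOKEN=...   # injected by the CI runner

acs2md doctor                                           # practice 1: validate first
acs2md space pages TEAMDOCS --format json > scope.json  # practice 2: confirm scope

# Practices 3 and 5: explicit mode, stable output directory.
# Practice 6: a per-run log and no progress noise in the CI transcript.
acs2md space convert TEAMDOCS \
  --output-dir ./docs \
  --sync \
  --log-file ./logs/acs2md-run.log \
  --no-progress
```

A script like this is also the artifact an auditor can read: every practice in the list above maps to one visible line.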
## A short operator checklist

Before a real run, confirm all of this is true:

- doctor passes
- the correct space was confirmed with space list and space pages
- the mode is explicit when deletion behavior matters
- secrets are handled through config or environment variables, not copied into notes
- logs are enabled for recurring or consequential runs
- the target directory is stable enough for state tracking

If even one of those points is unclear, stop and fix it before the export.

## How these practices map to ISO 27001 and SOC 2 controls

Each of the practices above is also a control statement an auditor can verify. If you operate inside an ISO 27001, ISO 27017, NIS 2, or SOC 2 program, this is how the seven practices line up:

| Practice                                   | Framework hook                                                                                                 |
| ------------------------------------------ | -------------------------------------------------------------------------------------------------------------- |
| doctor before scheduled runs               | ISO/IEC 27001:2022 A.5.37 (documented operating procedures), SOC 2 CC7.2 (system monitoring)                    |
| Discovery before conversion                | ISO/IEC 27001:2022 A.5.10 (acceptable use), A.5.18 (access rights)                                              |
| Output contract decided up front           | ISO/IEC 27001:2022 A.5.33 (protection of records), GDPR Art. 5(1)(e) (storage limitation)                       |
| Secrets out of shell history               | ISO/IEC 27001:2022 A.8.12 (data leakage prevention), A.5.17 (authentication information)                        |
| Stable output directory for recurring jobs | ISO/IEC 27001:2022 A.8.13 (information backup), SOC 2 A1.2 (backup processes)                                   |
| Routine logging and support bundles        | ISO/IEC 27001:2022 A.8.15 (logging), A.8.16 (monitoring activities), NIS 2 Article 21(2) incident-handling evidence |
| Workflows separated by purpose             | ISO 9001:2015 clause 4.4 (process approach), ISO/IEC 27001:2022 A.5.30 (ICT readiness for business continuity)  |

The practical message is that there is no extra "compliance mode" to enable.
The defensible workflow and the productive workflow are the same workflow when you choose the right defaults.

## Final recommendation

The best acs2md practice is not a specific flag. It is operational discipline. The teams that get the most value from the tool are the ones that validate the workstation, confirm scope, choose the correct mirror-or-archive contract, and keep enough logging to explain what happened later. That is how a conversion tool becomes a dependable documentation operation.]]></content:encoded>
            <category>space-conversion</category>
            <category>acs2md</category>
            <category>best-practices</category>
            <category>docs-as-code</category>
            <category>governance</category>
            <category>confluence</category>
            <category>iso-27001</category>
            <category>soc-2</category>
            <category>compliance</category>
        </item>
        <item>
            <title><![CDATA[Advanced Features of acs2md: Customization and Integration]]></title>
            <link>https://blog.climakers.com/en/blog/advanced-features-of-acs2md-customization-and-integration</link>
            <guid isPermaLink="false">https://blog.climakers.com/en/blog/advanced-features-of-acs2md-customization-and-integration</guid>
            <pubDate>Fri, 01 May 2026 09:00:00 GMT</pubDate>
            <description><![CDATA[Learn which acs2md flags shape metadata, links, media, and macro rendering, and how to integrate the output into Git, CI, migration engineering, and AI pipelines.]]></description>
            <content:encoded><![CDATA[The main acs2md workflow is intentionally simple: convert a space to Markdown. The advanced value comes from how much you can shape that output for the downstream system that will consume it. That is where the flags stop being cosmetic and start becoming architecture decisions.

## The first customization layer: content, links, and media

The space convert docs group the most important rendering controls into three buckets.

### Content and metadata

- --include-metadata adds YAML front matter with titles, authors, dates, IDs, and status data.
- --exclude-marks=true strips inline formatting for plain-text oriented output.
- --exclude-marks=false preserves richer inline formatting when the target renderer can use it.

That choice changes whether the export is aimed at static publishing, migration review, or text-first AI ingestion.

### Links

By default, acs2md rewrites Confluence URLs to local relative paths. That is ideal for portable Markdown sets and Git-based documentation. Turning it off makes sense only when you still want the local files but need the original Confluence URLs preserved.

### Images and media

The docs are clear here: embedded images increase file size. If the target is a text-first pipeline, disabling image embedding is often the right tradeoff.

## The second layer: macro and extension rendering

acs2md exposes a surprisingly useful set of extension flags for common Confluence constructs:

- --ext-render-toc
- --ext-render-recently-updated
- --ext-render-listlabels
- --ext-render-pagetree
- --ext-render-children
- --ext-render-contributors
- --ext-render-page-signatures
- --ext-render-qc-properties
- --ext-render-task-report
- --ext-render-content-report
- --ext-resolve-inline-card-titles

That flag surface matters when exported content has to remain legible outside Confluence rather than just technically converted.
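As a concrete sketch, a text-first export combining the controls above might look like this. The TEAMDOCS key and the exact boolean flag syntax are assumptions based on this post, not a verified invocation; check the space convert reference for your release before adopting it.

```shell
# Text-first export for a RAG or search corpus.
# ASSUMPTION: the space key and boolean flag forms are illustrative.
acs2md space convert TEAMDOCS \
  --output-dir ./corpus \
  --exclude-marks=true \
  --embed-images=false \
  --include-metadata \
  --ext-render-toc \
  --ext-resolve-inline-card-titles
# --exclude-marks=true strips inline formatting for plain-text output;
# --embed-images=false keeps files small for a text-first pipeline;
# the extension flags render Confluence macros as reviewable Markdown.
```

The shape of the command is the point: rendering controls, media policy, and macro handling are all decided up front by the downstream consumer, not discovered after the export.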
These options transform Confluence-specific constructs into Markdown that is easier to review, publish, or index.

## Integration patterns that show up in real projects

The customer workflows page makes the downstream use cases explicit. Here is the useful mental model:

| Target workflow          | Useful acs2md choices                                                             |
| ------------------------ | --------------------------------------------------------------------------------- |
| Static site or Git docs  | --include-metadata, --rewrite-links, stable --output-dir                           |
| RAG or enterprise search | --exclude-marks, --embed-images=false, repeatable scheduled runs                   |
| Migration engineering    | space pages --tree, space get in native Atlas Doc Format, then convert when ready  |
| Governance review        | space list, space properties, space permissions --resolve-users                    |
| Scheduled operations     | environment variables for secrets, --sync, --log-file                              |

The key point is that integration is not one pattern. The right output contract depends on whether humans, Git, a static renderer, or an indexing pipeline will consume the result.

## When space get is the better advanced move

One of the most useful details in the docs is the distinction between space convert and space get. Use space convert when you want portable Markdown. Use space get when you want the raw native payloads first, as ADF JSON or storage HTML. That is the right move for migration engineering, debugging custom transformation rules, or any case where you want the source representation before committing to Markdown rendering.

## CI, logging, and operator ergonomics

The configuration and utilities docs fill in the automation details:

- every config key can be overridden through ACS2MD-prefixed environment variables
- --log-file can send logs to a file, stdout, or stderr
- --no-progress keeps CI transcripts clean when you do not want progress output
- tree --short gives runbooks and onboarding docs a compact command inventory
- support creates a masked diagnostic bundle that is actually useful for escalation

A practical scheduled run combines those pieces: secrets from environment variables, an explicit mode, --log-file for evidence, and --no-progress for a clean transcript. That is the difference between a demo command and an operational integration.

## Wiring advanced flags into a compliance-grade pipeline

The same flags that customize the output also produce control evidence when used inside a CI job or a scheduled runner.

- --log-file plus --debug give you a deterministic per-run log, which is the artifact ISO/IEC 27001:2022 A.8.15 (logging) and the NIS 2 Article 21(2) incident-handling measures ask you to retain. Pin the log retention period in your records-control policy and the control is defensible.
- --include-metadata and the front-matter rendering options preserve source identity and dates. That makes it possible to satisfy ISO 9001:2015 clause 7.5.3 (control of documented information) for items like author, revision, and supersession.
- space get in native Atlas Doc Format produces a source snapshot. That snapshot, paired with the rendered Markdown, gives SOC 2 CC8.1 (change management) a clean before-and-after artifact.
- Running acs2md from CI under a pinned identity, with --no-progress and structured logs, lines up with ISO/IEC 27001:2022 A.8.31 (separation of development, test and production environments) and A.8.32 (change management) when documentation deliverables flow from a controlled pipeline.

In short: the advanced surface is also the audit surface. The flags you would use anyway are the ones that produce the evidence.

## Final recommendation

Do not treat the advanced flag surface as optional complexity. It is the layer that lets the same product feed a Git publishing flow, a compliance archive, a migration engineering review, or a RAG corpus without pretending those are the same job.
Start by deciding what the output must do after export. Then choose metadata, links, media, macro rendering, and state-tracking behavior to fit that downstream contract. That is the cleanest way to get useful Markdown instead of merely converted Markdown.]]></content:encoded>
            <category>space-conversion</category>
            <category>acs2md</category>
            <category>integration</category>
            <category>automation</category>
            <category>migration</category>
            <category>markdown</category>
            <category>iso-27001</category>
            <category>soc-2</category>
            <category>compliance</category>
        </item>
        <item>
            <title><![CDATA[Getting Started with acs2md: Installation and Basic Usage]]></title>
            <link>https://blog.climakers.com/en/blog/getting-started-with-acs2md-installation-and-basic-usage</link>
            <guid isPermaLink="false">https://blog.climakers.com/en/blog/getting-started-with-acs2md-installation-and-basic-usage</guid>
            <pubDate>Thu, 30 Apr 2026 12:00:00 GMT</pubDate>
            <description><![CDATA[Install acs2md, configure Confluence access, validate the workstation with doctor, and complete your first bulk space export in the right order.]]></description>
            <content:encoded><![CDATA[The fastest way to get a bad first impression of acs2md is to run a bulk export before the workstation is ready. The official docs avoid that trap on purpose: they walk from installation to diagnostics to discovery before the first real conversion. That order is worth keeping. It saves more time than any clever flag choice later.

## Before you begin

The customer docs call out the prerequisites clearly. For a clean first run you need:

- a Confluence Cloud site such as mycompany.atlassian.net
- a user with read access to the spaces you plan to export
- a valid acs2md license key, or a local offline license file for restricted environments
- network access from the workstation to Confluence Cloud
- enough local disk space for the first export

acs2md does not target Server or Data Center, and released builds are currently for macOS and Linux only.

## Step 1: install the binary and verify it

The docs split installation by platform:

- macOS ships as a universal .pkg
- Linux ships as an architecture-specific zip archive

In both cases the first technical check should be version and path discovery. If those commands behave as expected, the install is good and you know where the default config and license files live.

## Step 2: create configuration before touching content

Use the default config file unless you have a real reason to isolate environments. The docs show three supported ways to configure Confluence access: the config file, CLI commands, and environment variables.

The configuration docs are explicit about precedence: built-in defaults, then config file, then environment variables, then CLI flags. That makes it practical to keep durable defaults in the file and override secrets or per-run behavior from automation.

## Step 3: activate the license and validate readiness

Activate once on the target machine. If you are in an air-gapped environment, use --license-file instead of a hosted activation flow.
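The first three steps can be sketched as a short command sequence. The config where and license validate subcommands appear in the official docs referenced in this post; the version-check invocation is an assumption, and the activation syntax is release-specific, so verify both against your installed build.

```shell
# Step 1: confirm the binary responds and discover where the active
# config and license files live.
acs2md --version        # ASSUMPTION: the exact version flag may differ
acs2md config where

# Step 3: confirm the license before the first export.
# (Activation syntax varies by release; --license-file covers air-gapped hosts.)
acs2md license validate
```

If any of these commands fails, fix it here: this is the cheapest point in the workflow at which to find an environment problem.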
Then run the most important command in the product: acs2md doctor. It checks configuration, credentials, API connectivity, license state, machine identity, and version in one pass. The docs recommend running it immediately after activation, after credential changes, and before scheduled or customer-facing exports. That is the right habit.

## Step 4: discover the space before converting it

Bulk export should start with discovery, not optimism. Use space list to confirm the target set, then inspect the target space directly with space pages.

The docs note an easy detail to miss on current releases: space pages uses --format json; the legacy --json flag is not accepted. That matters because discovery output often gets piped into project notes, migration review, or customer signoff.

## Step 5: run the first real bulk export

Once the workstation and scope are both validated, the first real command is a plain space convert. The output directory behavior is also important: acs2md appends the space key as a subdirectory, so an export of the TEAMDOCS space with --output-dir ./docs lands in ./docs/TEAMDOCS rather than directly in ./docs.

From there, the tool will:

- fetch space pages
- preserve the hierarchy as nested folders and files
- render Markdown output
- create .convert-sync-state.json
- skip unchanged pages on later runs

On the observed live docs, the default behavior is incremental mode when neither --sync nor --incremental is specified.

## If something fails, use the supported debug path

The docs give a clean escalation order:

1. re-run acs2md doctor
2. verify the active config with acs2md config where
3. confirm the license with acs2md license validate
4. confirm scope again with space list and space pages
5. reproduce with debug logging and then generate a support bundle

That workflow is stronger than ad hoc debugging because it captures environment, diagnostics, and recent logs in one place.

## Why these checks double as compliance evidence

The setup sequence above is also a quiet evidence-generation flow for ISO 27001 and SOC 2 programs.
- acs2md doctor produces a deterministic readiness check that maps onto ISO/IEC 27001:2022 A.8.6 (capacity management) and A.5.37 (documented operating procedures) — operators can show that pre-export validation is repeatable, not improvised.
- acs2md config where and acs2md license validate produce evidence that ISO/IEC 27001:2022 A.5.18 (access rights) and license-compliance commitments are checked before content leaves the source platform.
- The --debug --log-file path produces durable per-run logs that support SOC 2 CC7.2 (system monitoring) and NIS 2 incident-handling evidence. Keep these logs in the same retention scheme as your other operational logs.

You do not need to bolt on a second tool to produce this evidence. The setup commands you would run anyway are already the right artifact.

## Final recommendation

The right first acs2md run is not just "install and convert." It is install, verify, configure, diagnose, discover, and only then export.

If you follow that order, the first bulk run is usually straightforward. If you skip it, you push setup mistakes into the most expensive part of the workflow. Start with doctor, confirm the space with space pages --tree, and let the first successful export happen only after the machine is obviously ready.]]></content:encoded>
            <category>space-conversion</category>
            <category>acs2md</category>
            <category>installation</category>
            <category>confluence</category>
            <category>onboarding</category>
            <category>bulk-export</category>
            <category>iso-27001</category>
            <category>compliance</category>
        </item>
        <item>
            <title><![CDATA[Introduction to acs2md: What It Is and Why It Is Useful for Developers]]></title>
            <link>https://blog.climakers.com/en/blog/introduction-to-acs2md-what-it-is-and-why-it-is-useful-for-developers</link>
            <guid isPermaLink="false">https://blog.climakers.com/en/blog/introduction-to-acs2md-what-it-is-and-why-it-is-useful-for-developers</guid>
            <pubDate>Thu, 30 Apr 2026 09:00:00 GMT</pubDate>
            <description><![CDATA[acs2md turns whole Confluence spaces into portable Markdown with hierarchy preservation, rewritten links, and repeatable estate-scale workflows that line up with ISO 9001, ISO 27001, ISO 27017, NIS 2, and SOC 2 evidence requirements.]]></description>
            <content:encoded><![CDATA[acs2md is the bulk space converter in the Climakers portfolio. It is built for the moment when a team stops asking, "How do we export this page?" and starts asking, "How do we move or preserve this whole documentation estate without losing control?"

That distinction matters. A single-page export tool solves a local problem. A space-scale converter solves migration, continuity, compliance, governance, and AI-ingestion work that has to stay operational over time.

## acs2md in one sentence

acs2md converts an entire Atlassian Confluence Cloud space into Markdown while preserving page hierarchy, rewriting internal links, and tracking future refreshes through .convert-sync-state.json.

In practice, that means one tool can help you:

- maintain a current continuity copy outside Confluence
- stage a docs-as-code migration into Git
- build a repeatable Markdown corpus for RAG or enterprise search
- inspect permissions, properties, and page inventories before you export anything
- keep later runs efficient by skipping pages that did not change

## Why developers care about it

Most teams do not license a space converter because they love conversion. They do it because a larger workflow is blocked. Typical developer-side reasons include:

- A static-site migration needs structure, not a pile of flat files.
- A Git repository needs local links that still work after export.
- A RAG pipeline needs repeatable Markdown rather than a proprietary viewer format.
- A continuity plan needs a local copy that can be reviewed, backed up, and audited outside the source platform.
- A migration program needs discovery commands before anyone commits to a bulk run.

The docs position acs2md as a space-scale tool for continuity, migration payloads, governed exports, and documentation-estate portability. That is a much more useful framing than "bulk export CLI."

## What acs2md actually covers

acs2md is broader than space convert.
The operational surface is closer to a small toolkit:

| Command area                            | What it is for                                                       |
| --------------------------------------- | -------------------------------------------------------------------- |
| space convert                           | Export a whole space to Markdown                                     |
| space get                               | Download native Confluence payloads as ADF JSON or storage HTML      |
| space list and space pages              | Discover scope before running a consequential export                 |
| space properties and space permissions  | Capture governance context alongside content                         |
| page ... subcommands                    | Inspect or convert a single page without switching to another binary |
| doctor, tree, support, completion       | Validate, discover, troubleshoot, and automate the tool surface      |

That command split is useful because real migrations are rarely just one command. Operators need readiness checks, scope validation, and follow-up inspection before and after the export.

## Why this matters for compliance and continuity programs

If your team operates inside a regulated environment, acs2md is also the artifact that turns Confluence content into evidence. Several of the controls your auditors care about ask for the same thing: a current, restorable, inspectable copy of documented information that does not depend on the source platform being available.

- ISO 9001:2015 clause 7.5 requires that documented information is controlled, current, and retrievable. A Markdown estate in Git satisfies the "control of documented information" requirement without locking the record inside a vendor portal.
- ISO/IEC 27001:2022 A.5.30 (ICT readiness for business continuity) and A.8.13 (information backup) ask for tested recovery and repeatable backups. A scheduled acs2md space convert --sync run is exactly that.
- ISO/IEC 27017:2015 CLD.12.1.5 addresses cloud-customer operational security. Read-only GET access to Confluence Cloud, executed from your own automation, is the kind of admin-side control the standard expects.
- NIS 2 Article 21(2)(c) requires business continuity, backup management, and crisis management for essential and important entities. A Git-tracked copy gives you the artifact and the audit trail in one move. - SOC 2 Availability (A1.2 and A1.3) asks for backup processes and recovery testing. Tagged commits act as restore points, and acs2md doctor produces evidence that the recovery path still works. This is why the docs talk about continuity, governance, and AI ingestion in the same breath: in a regulated documentation estate, those are the same workflow. The moment acs2md becomes the right fit Use acs2md when the question is about a whole documentation estate. That includes: - docs-as-code migrations where hierarchy must survive - continuity copies that need recurring refreshes - archive programs that should outlive Confluence deletions - RAG or search ingestion where Markdown should be regenerated on schedule - governance work where permissions and metadata matter as much as page bodies If the requirement is one exact page, acp2md remains the sharper tool. The acs2md docs say that directly, and the distinction is sensible: space workflows and page workflows are not the same operational problem. Boundaries worth knowing early The current docs make a few guardrails explicit: - acs2md supports Confluence Cloud only - released builds are currently available for macOS and Linux - the tool is read-only by design and uses GET requests against Confluence - long-running jobs show progress on stderr so stdout can stay useful for automation Those are not marketing details. They tell you whether the tool fits your platform, your security model, and your pipeline conventions before you invest time in rollout. A realistic first operator flow The recommended first run is not guesswork. The customer docs and the GitHub manual line up around a predictable sequence: run doctor to check readiness, confirm the target with space list, inspect the hierarchy with tree, then run space convert. That flow does four important things in order: 1. proves the workstation is ready 2.
confirms the target space is the right one 3. shows the hierarchy before conversion 4. produces a portable Markdown tree with local links That is exactly the kind of operator discipline teams need when a conversion job is tied to migration windows, customer reviews, or backup commitments. Why the state file matters more than it looks acs2md uses .convert-sync-state.json to compare current page versions against earlier runs. That lets later exports skip unchanged content and only re-convert what moved. This matters because the tool is not just for one-time migration. It is also designed for repeatable maintenance workflows. That one file is what makes these two modes possible: - --sync for live mirrors that delete local files when source pages disappear - --incremental for archive-style exports that keep local files after upstream deletion Both patterns show up constantly in real documentation programs. Final recommendation Treat acs2md as a control and portability tool for Confluence estates, not just as a converter. If your team needs repeatable space exports, local ownership of documentation, or a Markdown corpus that can feed Git, compliance, or AI workflows, acs2md is the right place to start. Read the overview first, then move directly into the getting-started guide and the sync vs incremental docs. That sequence gets you from product positioning to an actual operator-grade workflow without wasting a run.]]></content:encoded>
            <category>space-conversion</category>
            <category>acs2md</category>
            <category>confluence</category>
            <category>markdown</category>
            <category>docs-as-code</category>
            <category>developer-workflows</category>
            <category>compliance</category>
            <category>iso-27001</category>
            <category>nis-2</category>
        </item>
        <item>
            <title><![CDATA[How to make Confluence content AI-ready before it reaches your RAG pipeline]]></title>
            <link>https://blog.climakers.com/en/blog/how-to-make-confluence-content-ai-ready-before-it-reaches-your-rag-pipeline</link>
            <guid isPermaLink="false">https://blog.climakers.com/en/blog/how-to-make-confluence-content-ai-ready-before-it-reaches-your-rag-pipeline</guid>
            <pubDate>Sat, 25 Apr 2026 18:00:00 GMT</pubDate>
            <description><![CDATA[A practical guide for teams that need Confluence content exported into clean Markdown before chunking, indexing, and retrieval workflows amplify formatting noise — plus how to keep the AI pipeline aligned with ISO 27001, NIS 2, and SOC 2 expectations.]]></description>
            <content:encoded><![CDATA[Most teams do not have an AI problem first. They have a content quality problem that AI systems make more expensive. When Confluence content moves directly into chunking, embedding, and retrieval workflows without cleanup, every formatting defect gets amplified. Broken structure becomes poor chunk boundaries. Noisy exports become noisy retrieval. Bad links and flattened tables become weak answers in front of users. That is why AI-ready content work should start before the RAG pipeline. The export layer has to produce portable Markdown with stable structure, readable code blocks, and predictable paths before indexing begins. Why raw Confluence exports weaken retrieval quality A retrieval stack is only as trustworthy as the documents it ingests. When content arrives with poor structure, the failure modes are predictable: - chunks break across the wrong boundaries because headings are weak or inconsistent - code samples lose clarity and become hard for models to ground on - tables flatten into low-signal text that hurts retrieval quality - internal references point back to Confluence instead of the durable content estate - duplicated or noisy markup pollutes embeddings with irrelevant tokens This is not just a formatting problem. It is a retrieval accuracy problem. What AI-ready Markdown should preserve If the goal is downstream RAG, assistant search, or internal knowledge retrieval, the exported Markdown should preserve the structure humans and systems both rely on. That usually means: - headings that reflect real semantic sections - code blocks that remain fenced and readable - tables that stay understandable enough to summarize or review - stable filenames and directory paths for indexing pipelines - metadata that keeps source identity, locale, and content relationships intact Clean Markdown gives chunkers better inputs. Better inputs usually produce better retrieval. 
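The "better inputs" claim above can be made mechanical before anything reaches a chunker. The sketch below is illustrative and not part of any Climakers tool: it checks one exported Markdown file for the two defects that hurt chunkers most, missing headings and unbalanced code fences. The messages and thresholds are arbitrary choices.

```shell
# Minimal pre-chunking gate for one exported Markdown file.
# Flags missing headings (arbitrary chunk boundaries) and
# unbalanced code fences (code leaking into prose chunks).
check_md() {
  headings=$(grep -c '^#' "$1")
  fences=$(grep -c '^```' "$1")
  if [ "$headings" -eq 0 ]; then
    echo "WARN: $1: no headings - chunk boundaries will be arbitrary"
    return 1
  fi
  if [ $((fences % 2)) -ne 0 ]; then
    echo "WARN: $1: unbalanced code fences"
    return 1
  fi
  echo "OK: $1 looks chunk-ready"
}

# Gate a whole exported tree before indexing, for example:
#   find docs/ -name '*.md' | while read -r f; do check_md "$f"; done
```

A gate like this belongs between the export step and the indexing step, so noisy pages are fixed at the source instead of being embedded as-is.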
Why page-level and space-level scope both matter for AI preparation Some AI preparation work starts with a single high-value page. Some starts with a whole documentation estate. Use acp2md when the unit of value is one page that needs exact treatment before it enters an AI workflow. Use acs2md when the requirement is to convert and refresh a broader Confluence space so the whole estate can feed indexing, retrieval, or assistant search. The choice is not about file format alone. It is about choosing the right scope so the exported Markdown remains governable. Recommended workflow before content reaches the RAG stack The safest pattern is to validate the environment, confirm source scope, export to portable Markdown, and only then hand the content to chunking and indexing stages. 1. Validate the operator environment Before the first export, check credentials, license state, and the local runtime. This removes avoidable failures before the AI pipeline ever sees the source content. 2. Confirm the real source scope For a single high-value page, confirm the page directly. For a broader knowledge estate, inspect the target space. That keeps ingestion aligned with the real source set instead of assumptions. 3. Export to customer-controlled Markdown first For one page, run acp2md against the exact page ID or URL; for a whole space, run acs2md space convert into the target directory. This is the handoff point where proprietary content becomes durable Markdown under the team’s control. 4. Review the Markdown before chunking Do not feed the export straight into embeddings without inspection. Review whether: 1. headings reflect real sections 2. code blocks stayed intact 3. tables still carry usable meaning 4. links and filenames are stable 5. the output path fits the indexing convention If the Markdown is noisy here, the RAG system will usually be noisy later. 5. Index only the cleaned estate Once the Markdown tree is stable, the indexing or chunking pipeline can treat it as a governed content source instead of a transient export artifact. That separation matters.
It keeps the content preparation layer auditable and reusable outside any single AI vendor or retrieval stack. Why Git still matters in AI workflows Teams often talk about embeddings, vector stores, and chunking strategies while skipping the more basic control plane: versioned source content. Git helps AI-ready content workflows because it gives teams: - visible diffs when source knowledge changes - recoverable checkpoints for indexed content revisions - reviewable history for compliance-sensitive sources - a durable bridge between docs operations and ML or search operations If the source Markdown is not governed, the retrieval layer ends up carrying too much ambiguity. Common mistakes before RAG ingestion The same errors show up repeatedly: - indexing raw exports before checking formatting quality - letting broken headings define chunk boundaries - treating code blocks and tables as disposable noise - losing source identity and path stability between refreshes - coupling the knowledge layer too tightly to one downstream tool Most of these are solved earlier than people think. They are export-discipline problems before they become AI problems. AI-ready content is also compliance-ready content A RAG pipeline is not a free pass on documented-information control. The moment Confluence content is embedded in a vector store and surfaced through an assistant, it becomes part of the same documented-information surface your auditors look at. - ISO/IEC 27001:2022 A.5.12 (classification of information) and A.5.13 (labelling of information) apply before content enters retrieval. If a page is restricted, embedding it into an unrestricted index breaks the control. A Markdown estate in Git lets you filter by path or front matter before indexing, instead of asking the vector store to enforce classification it cannot see. - ISO/IEC 27001:2022 A.5.33 (protection of records) and A.8.13 (information backup) still apply to the source corpus. 
The clean Markdown that feeds the index is the system of record. Treat it as one. - NIS 2 Article 21(2)(d) (supply chain security) is increasingly read as covering AI suppliers. A locally controlled Markdown corpus is an audit-friendly boundary between your knowledge and a third-party model provider. - SOC 2 Common Criteria CC6.1 (logical access) and CC8.1 (change management) ask for evidence of authorized change to systems that affect security. A Git-tracked corpus gives you that evidence for the retrieval layer's source content, even when the embedding store does not. - ISO 9001:2015 clause 7.1.6 (organizational knowledge) asks organizations to maintain knowledge necessary for the operation of processes. A Markdown corpus that feeds both humans and assistants is one of the cleanest ways to satisfy that clause without duplicating the knowledge base across systems. The takeaway is simple: AI-ready content is also compliance-ready content when it starts as customer-controlled Markdown under version control. When Climakers tooling is the right fit acp2md and acs2md are a good fit when the team wants customer-controlled Markdown before content enters AI, search, or Docs-as-Code workflows. That usually means: - retrieval projects that need clean source artifacts - assistant search systems that depend on stable structure - regulated environments that need audit-friendly source history - content estates that must serve both humans and AI systems The strongest AI pipelines usually start with better source content, not with more elaborate post-processing. Final take If Confluence content is heading into a RAG pipeline, the most important decision may happen before chunking starts. Export the content into clean, structured, customer-controlled Markdown first. That gives retrieval systems a better foundation, keeps the content estate governable, and avoids letting formatting noise become model-facing noise at scale.
When you are ready to make Markdown your AI-and-audit substrate, pick an acs2md plan in the store or start with acp2md for a single high-value page.]]></content:encoded>
            <category>ai-content</category>
            <category>confluence</category>
            <category>markdown</category>
            <category>rag</category>
            <category>ai-ready-content</category>
            <category>acp2md</category>
            <category>acs2md</category>
            <category>compliance</category>
            <category>iso-27001</category>
            <category>nis-2</category>
            <category>soc-2</category>
        </item>
        <item>
            <title><![CDATA[How to export one Confluence page to clean Markdown with acp2md]]></title>
            <link>https://blog.climakers.com/en/blog/how-to-export-one-confluence-page-to-clean-markdown-with-acp2md</link>
            <guid isPermaLink="false">https://blog.climakers.com/en/blog/how-to-export-one-confluence-page-to-clean-markdown-with-acp2md</guid>
            <pubDate>Sat, 25 Apr 2026 16:00:00 GMT</pubDate>
            <description><![CDATA[A practical guide for teams that need a single Confluence page exported to portable Markdown with preserved formatting, predictable output, and exact scope control.]]></description>
            <content:encoded><![CDATA[Many Confluence to Markdown projects do not begin at space scale. They begin with one page that matters enough to move carefully. That page might be a compliance procedure, an executive runbook, a regulated SOP, a support playbook, or a knowledge article you need to preserve before the surrounding space is touched. In those cases, broad export is the wrong first move. The safer move is exact scope control. That is where acp2md fits. It is built for page-level work where the unit of value is a single Confluence page and the output needs to be portable, reviewable, and ready for Git or downstream processing. When a single-page export is the right starting point Teams often assume a Confluence migration has to start with an entire documentation estate. That is not always true. Single-page export is usually the better path when: - one page has legal, operational, or audit significance - you are piloting formatting fidelity before a broader rollout - the target is an AI-ready Markdown artifact for one high-value source - a team needs to preserve one document immediately without waiting for a larger migration decision - the surrounding space is noisy, but the page itself is well defined In those cases, page precision is a feature, not a limitation. What a good page export needs to preserve The job is not finished when a .md file exists. The job is finished when the page can still be used outside Confluence. For a page-level workflow, that usually means preserving: - headings and section structure - fenced code blocks that still read cleanly in Git - tables that remain understandable in plain Markdown - important links, references, and lists - a stable file path that can live inside a docs repo or evidence folder The quality bar is operational. If reviewers still need to repair the output before they can use it, the export path is not ready. 
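That quality bar can be probed mechanically before a reviewer even opens the file. The sketch below checks one exported page for links that still point back at Confluence; the `*.atlassian.net` host pattern is an assumption based on the default Confluence Cloud domain and should be replaced with your actual site host.

```shell
# Quick fidelity probe for a single exported page: does it still
# link back to the source platform instead of the local estate?
# The *.atlassian.net pattern is an assumed default - substitute
# your own Confluence Cloud site domain.
page_links_ok() {
  if grep -qE 'https?://[A-Za-z0-9-]+\.atlassian\.net' "$1"; then
    echo "WARN: $1 still links back to Confluence"
    return 1
  fi
  echo "OK: $1 links are local"
}
```

Run it on the exported file before handoff; a warning here means the page is not yet usable outside Confluence, which is the whole point of the export.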
Why acp2md is the better tool for exact page scope acs2md is the right tool when the unit of work is the space. acp2md is the right tool when one page is the job. That matters because page-level workflows benefit from: - direct targeting by page ID, title, or URL - less noise during pilot migrations - easier inspection of formatting fidelity before larger rollout - cleaner handoff for compliance, legal, or incident-response teams Scope discipline reduces cleanup. That is one of the fastest ways to make Markdown exports more trustworthy. Recommended workflow for an exact page export The safest pattern is to validate the workstation, confirm the exact page, inspect likely formatting complexity, and only then write the Markdown file. 1. Validate the environment first Before touching the page, check that the workstation, license, and credentials are in good shape. That step is cheap, and it prevents avoidable failures later in the flow. 2. Confirm the page you actually mean Page-level workflows break when teams assume they are targeting the right document. A quick lookup by page ID, title, or URL is the sanity check that proves the page identity before you export anything. 3. Inspect likely formatting complexity Before conversion, inspect the page for marks and rich formatting that may deserve extra review. That inspection helps teams identify whether the page is simple, formatting-heavy, or likely to need closer validation after export. 4. Export to a stable Markdown path When the output matters, write it directly into the path where the organization expects to keep it. That turns the page into customer-controlled Markdown instead of leaving it inside a proprietary platform boundary. 5. Commit and review the file like governed content If the page is important, the exported Markdown should be reviewed like any other controlled artifact. That creates a clean audit trail and makes the file reusable in broader documentation workflows.
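The workflow above can be sketched as one guarded script. Be warned about the assumptions: the subcommand and flag names used here (doctor, convert, --output) are not documented acp2md syntax and must be verified against acp2md --help on your build; only the shape of the flow (validate, export to a stable path, commit) comes from the text.

```shell
# Sketch of the page-export flow. Subcommand and flag names below
# are ASSUMPTIONS, not documented acp2md syntax.
export_page() {
  page_id="$1"; dest="$2"
  if ! command -v acp2md >/dev/null 2>&1; then
    echo "acp2md not installed - skipping export"
    return 0
  fi
  acp2md doctor || return 1                              # 1. validate workstation
  acp2md convert "$page_id" --output "$dest" || return 1 # 2-4. confirm and export
  git add "$dest"                                        # 5. governed review
  git commit -m "docs: export Confluence page $page_id"
}
```

The guard at the top makes the script safe to wire into automation before the tool is rolled out to every workstation.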
How to validate the exported page before handoff Before the team treats the Markdown file as production-ready, inspect it with the same standards you would apply to a manually authored document. Check at least the following: 1. Headings reflect the original page structure. 2. Code blocks are readable and fenced correctly. 3. Tables are still understandable in plain text review. 4. Links still point somewhere meaningful. 5. The output filename and path match how the organization expects to store the document. Page-level export is useful precisely because validation is tractable. Use that advantage. Common mistakes in page-level migration work The usual failures are simple and avoidable. - exporting before confirming the exact page ID or URL - treating a one-page workflow like a full-space migration - writing output to a temporary path that never enters Git or retention storage - assuming formatting fidelity without reading the resulting Markdown - expanding scope too early instead of proving one-page quality first Most teams do not need more tooling at this stage. They need tighter scope and better review discipline. Why single-page export is a natural fit for SOPs and regulated procedures A standard operating procedure, a runbook, or a regulated work instruction is rarely the right place for a sweeping migration. It is the place where you want surgical control: one document, one approval trail, one exported artifact. That is exactly what acp2md produces, and it is why single-page export lines up with several common control requirements: - ISO 9001:2015 clause 7.5.2 (creating and updating documented information) asks for appropriate identification, format, and review. A single-page Markdown file with a stable path and a Git commit gives you all three. - ISO/IEC 27001:2022 A.5.31 (legal, statutory, regulatory and contractual requirements) and A.5.37 (documented operating procedures) ask for documented operating procedures that can be reviewed and updated. 
A page-level export turns one Confluence procedure into a versioned record without dragging the rest of the space along. - NIS 2 Article 21(2)(b) requires documented incident handling procedures. Exporting an incident-response runbook with acp2md produces a portable artifact that survives the loss of Confluence access during the actual incident. - SOC 2 Common Criteria CC2.2 asks for entity-wide policies that are communicated and updated. Single-page export supports that without forcing a full-space migration first. The point is that one important page often has its own compliance lifecycle. Treating it as one artifact — exported, reviewed, committed — is the lightest workflow that an auditor will accept. When to step up from acp2md to acs2md acp2md is not the wrong tool because it is narrower. It becomes the wrong tool when the operational need is no longer page-specific. Move up to acs2md when: - the target is a full documentation space - continuity copies need to stay current over time - internal links need to be rewritten across a larger estate - the workflow is moving from pilot to governed migration pipeline The cleanest programs usually start small, prove fidelity on one page, and then widen scope deliberately. Final take If one Confluence page is the real unit of risk, export that page with exact control instead of opening a larger migration front than the team needs. With acp2md, teams can create clean Markdown artifacts for compliance, operations, AI, and Docs-as-Code workflows without waiting for a full-space decision. That makes page-level export a practical first move, not a compromise. When you are ready, start with acp2md in the store.]]></content:encoded>
            <category>page-export</category>
            <category>confluence</category>
            <category>markdown</category>
            <category>acp2md</category>
            <category>single-page-export</category>
            <category>compliance</category>
            <category>ai-ready-content</category>
            <category>iso-9001</category>
            <category>iso-27001</category>
            <category>sop</category>
        </item>
        <item>
            <title><![CDATA[How to build Confluence continuity copies that stay current in Git]]></title>
            <link>https://blog.climakers.com/en/blog/how-to-build-confluence-continuity-copies-that-stay-current-in-git</link>
            <guid isPermaLink="false">https://blog.climakers.com/en/blog/how-to-build-confluence-continuity-copies-that-stay-current-in-git</guid>
            <pubDate>Sat, 25 Apr 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[A practical guide for teams that need repeatable Confluence continuity copies in Markdown, with rewritten links, preserved hierarchy, and audit-friendly Git history that maps to ISO 27001, ISO 27017, NIS 2, and SOC 2 controls.]]></description>
            <content:encoded><![CDATA[Confluence continuity work should not begin with panic. It should begin with a repeatable export path that produces customer-controlled Markdown, preserves enough structure to stay useful, and can be refreshed on demand. That is the difference between having a backup artifact somewhere and having a continuity copy that operators can actually use. When a team needs to recover documentation access, respond to an ISO 27001 surveillance audit, evidence a NIS 2 incident timeline, satisfy an SOC 2 availability test, seed a new portal, or hand content to AI pipelines, the copy has to be readable, navigable, and current. Continuity is no longer optional for most operators. NIS 2 makes it an Article 21 cybersecurity risk-management duty for thousands of essential and important entities. ISO/IEC 27001:2022 added control A.5.30 specifically for ICT readiness for business continuity. SOC 2 trust services criteria require evidence of backup design and recovery testing. And ISO 9001 has always required that documented information be controlled, current, and retrievable. A continuity copy in Git is one of the cleanest ways to produce that evidence without adding a new platform to the audit scope. Continuity copies are not the same as backups Many teams say they have a documentation backup when what they really have is an opaque archive or a one-time export no one trusts. A usable continuity copy has different requirements: - it must be readable without returning to Confluence - it must preserve hierarchy well enough to behave like a documentation estate - it must be refreshable without rebuilding the process from scratch - it must be versionable so teams can prove what changed and when - it must be portable enough to feed publishing, retention, and AI workflows Continuity is operational. If the copy cannot be inspected quickly, reviewed in Git, and regenerated on a schedule, it is not solving the continuity problem. 
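A refresh that "can be regenerated on a schedule" can be as small as one guarded script driven by cron or CI. The space convert --sync invocation matches the mode the docs describe; the positional space key, the --output flag, and the repository layout are assumptions to adapt to your build.

```shell
# Continuity refresh meant to run from cron or CI, for example:
#   0 2 * * 1  /opt/docs/refresh.sh        (weekly, Monday 02:00 UTC)
# The positional space key and --output flag are ASSUMPTIONS; the
# `space convert --sync` mode itself is documented behavior.
refresh_estate() {
  space_key="$1"; estate_dir="$2"
  if ! command -v acs2md >/dev/null 2>&1; then
    echo "acs2md not installed - skipping refresh"
    return 0
  fi
  # --sync deletes local files when source pages disappear: right for
  # a live mirror. Use --incremental instead for archive-style estates.
  acs2md space convert "$space_key" --sync --output "$estate_dir" || return 1
  cd "$estate_dir" || return 1
  git add -A
  git commit -m "continuity refresh $(date -u +%Y-%m-%d)" || echo "no changes"
}
```

Each run either produces a reviewable diff or records that nothing changed, which is exactly the cadence evidence an auditor asks for.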
What a usable continuity copy has to preserve Teams often focus on file creation and overlook semantic preservation. That is the wrong optimization. For continuity work, the copy should keep: - headings and outline structure - internal links rewritten into the local estate - code blocks in stable fenced Markdown - tables that are still reviewable in Git - images and referenced media with usable paths - enough metadata to support audit, indexing, and downstream automation The reason this matters is simple: continuity events rarely happen at a convenient time. If operators have to re-clean the export before they can use it, the copy is already failing when it matters most. Why Git improves continuity instead of complicating it Some teams treat Git as an engineering convenience rather than a continuity control. That misses the point. Git gives continuity workflows several practical advantages: - every refresh produces a diff instead of an uninspectable blob - teams can prove cadence and change history to auditors or stakeholders - a known-good copy can be restored instantly from a tagged commit - the same Markdown tree can feed static sites, search indexes, and retrieval pipelines Git does not fix a bad export. But when the export is deterministic and portable, Git turns continuity from a manual emergency task into a governed operating process. Recommended workflow for a repeatable continuity estate The safest pattern is to validate the environment, inspect the scope, export to a Git-owned directory, and then refresh on a schedule. 1. Validate the workstation before the first export Start by checking credentials, license state, and workstation health. This is a cheap check that catches broken assumptions before a real continuity run is attempted. 2. Inspect the target space and page tree Do not schedule a sync job against a space you have not inspected. 
That inventory step makes it easier to confirm what the continuity copy should contain and whether the hierarchy matches what downstream consumers expect. 3. Export into a Git-owned continuity directory When the goal is a reusable estate, the output path should already live inside a repository or a directory intended to be committed. The important part is not only that the export succeeds. It is that the copy lands in a durable place with rewritten local links and a refresh mechanism that can be repeated. 4. Commit the resulting tree and review diffs Once the copy exists, treat it like any other governed content change. That step creates an audit trail and makes it possible to compare one continuity snapshot to the next. 5. Repeat on a schedule instead of waiting for an incident Continuity workflows fail when they depend on memory. A copy that was only generated once six months ago is not a continuity strategy. Use a scheduler, CI job, or internal automation to run the refresh path on a cadence that matches operational risk. The role of stable paths and rewritten links Continuity copies become immediately more useful when local paths behave predictably. If an incident forces a team to work from the copy, they should not discover that every internal reference still points back to Confluence. Local link rewriting matters because it turns the export into a self-contained estate rather than a disconnected file dump. Stable output paths also matter for: - site generation - search indexing - retention workflows - downstream AI ingestion - incident handoff between teams Predictability is part of recoverability. How to validate a continuity copy before you trust it Before declaring the workflow done, inspect the copy the way an operator would inspect it under pressure. Check at least the following: 1. The top-level structure mirrors the expected space hierarchy. 2. Sample internal links resolve inside the exported tree. 3. Code blocks remain readable and language-aware. 4. 
Tables remain understandable in plain Markdown. 5. Deleted or renamed pages show up as meaningful diffs on the next sync. 6. Media references and important diagrams still resolve. Validation should happen before a real continuity event forces the issue. Mapping continuity copies to ISO, NIS 2, and SOC 2 controls Auditors do not accept "we have a backup somewhere" as evidence. They look for a documented, repeatable, testable process that links a control objective to a concrete artifact. A Git-tracked continuity copy produced by acs2md lines up cleanly with the most common framework requirements: | Framework | Control or clause | What the continuity copy provides | | --------------------------- | -------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | | ISO/IEC 27001:2022 | A.5.30 ICT readiness for business continuity | A current, restorable Markdown estate that can be tested without depending on Confluence availability. | | ISO/IEC 27001:2022 | A.5.33 Protection of records | Versioned Git history proves who changed what, when, and how the record evolved. | | ISO/IEC 27001:2022 | A.8.13 Information backup | Repeatable export schedule, deterministic output, and a state file to detect drift. | | ISO/IEC 27017:2015 | CLD.12.1.5 Administrator's operational security | Read-only GET access against Confluence Cloud, recorded in CI logs, suitable for cloud-customer evidence. | | ISO 9001:2015 | Clause 7.5 Documented information | Customer-controlled Markdown that can be reviewed, distributed, and superseded outside a vendor portal. | | NIS 2 (Directive 2022/2555) | Article 21(2)(c) Business continuity, backup management, crisis management | Refreshable continuity copy with timestamped commits that can support incident reconstruction and the 24/72-hour notification flow. 
| | SOC 2 (TSC 2017) | A1.2 / A1.3 Availability — backup processes and recovery testing | Tagged commits act as restore points; a periodic --sync job is the documented recovery procedure. | The point is not to claim the tool "makes you compliant". No tool does. The point is that the artifact acs2md produces is the kind of artifact each of these frameworks asks for: an inspectable, dated, restorable copy of the documented information that runs your business. If your security or quality program relies on Confluence as a system of record, a continuity copy is the difference between a clause you can evidence in five minutes and a clause that turns into an audit finding. What teams usually get wrong The most common failure patterns are operational, not theoretical. - generating one export and never refreshing it - keeping only proprietary archives that are hard to inspect - skipping link rewriting and local path validation - failing to store the copy in Git or another auditable system - assuming continuity and migration are the same thing every time Migration and continuity overlap, but the continuity requirement is narrower and more unforgiving: the copy must still be usable when the original system is unavailable or inconvenient. When acs2md is the right fit acs2md is the right tool when the unit of work is the space, not the single page. That usually means: - continuity estates for a whole documentation space - governed backup workflows with repeatable refreshes - Git-native copies used for audit, recovery, or publication - Markdown trees that need rewritten links and preserved hierarchy If the requirement starts with one page, acp2md is usually the better first step. If the requirement is continuity at space scale, acs2md is where the workflow belongs. Final take The point of a continuity copy is not to prove that export happened once. 
The point is to create a current, inspectable, repeatable Markdown estate that can survive migration pressure, audit pressure, and platform disruption — and that can be shown to an ISO 27001, ISO 27017, NIS 2, or SOC 2 assessor without scrambling. With acs2md, a disciplined export path, and Git-based review, teams can keep Confluence continuity copies current instead of discovering too late that their backup story was only nominal.

Compare the acs2md plans in our store when you are ready to make this part of your control evidence.]]></content:encoded>
            <category>continuity</category>
            <category>confluence</category>
            <category>backup</category>
            <category>markdown</category>
            <category>acs2md</category>
            <category>audit-ready-content</category>
            <category>iso-27001</category>
            <category>iso-27017</category>
            <category>nis-2</category>
            <category>soc-2</category>
            <category>compliance</category>
        </item>
        <item>
            <title><![CDATA[The ultimate guide to migrating Confluence to Docs-as-Code without losing your formatting]]></title>
            <link>https://blog.climakers.com/en/blog/ultimate-guide-confluence-to-docs-as-code-without-losing-formatting</link>
            <guid isPermaLink="false">https://blog.climakers.com/en/blog/ultimate-guide-confluence-to-docs-as-code-without-losing-formatting</guid>
            <pubDate>Sat, 25 Apr 2026 09:00:00 GMT</pubDate>
            <description><![CDATA[A practical migration guide for teams that need clean Markdown, preserved structure, and a repeatable path from Confluence Cloud into Docs-as-Code workflows that satisfy ISO 9001, ISO 27001, ISO 27017, NIS 2, and SOC 2 evidence requirements.]]></description>
            <content:encoded><![CDATA[Confluence is often where business-critical documentation starts, but it is rarely where modern documentation workflows should end. Teams that want Docs-as-Code usually need more than a one-time export. They need content that can be versioned, reviewed, synchronized, searched, archived, and reused across engineering, support, compliance, and AI workflows. That is where most migrations fail: not at the headline level, but in the details that operators care about once the first export lands on disk.

## Why native Confluence exports break Docs-as-Code workflows

The problem is not that Confluence cannot export content. The problem is that default export paths do not produce a durable, automation-friendly Markdown estate. Common failure modes look like this:

- tables flatten badly or become hard to review in Git
- code blocks lose clean language-aware formatting
- internal page links still point back to Confluence instead of the local documentation tree
- hierarchy is lost, so a space no longer behaves like a navigable docs estate
- downstream search, static publishing, and RAG ingestion inherit noisy or proprietary output

Docs-as-Code is not just a file format choice. It is an operating model. That means the export has to support reviewable diffs, predictable paths, automation, and repeated execution.

## Start with the right migration scope

Before choosing a command, decide whether the job is page-precise or estate-wide.

### Use acp2md when one page is the real unit of work

Use acp2md when the migration task starts from a single page ID, title, or URL and you need exact control.
That makes it a good fit for:

- high-value compliance pages
- pilot migrations for one document
- troubleshooting formatting before a wider rollout
- building a plain Markdown artifact for legal retention or AI ingestion

### Use acs2md when the target is a full documentation estate

Use acs2md when the requirement is to move a whole Confluence space into portable Markdown while preserving hierarchy, rewriting internal links, and supporting repeatable refreshes. That makes it the right tool for:

- large documentation migrations
- continuity copies
- governed backup workflows
- CI and scheduled sync pipelines
- Git-based documentation estates that must stay current over time

## What formatting teams actually need to preserve

The real question is not whether Markdown is simpler than Confluence. The real question is whether the conversion preserves the semantics that matter once content leaves Confluence. For a serious Docs-as-Code workflow, that usually means keeping:

- headings with a usable document outline
- code blocks in fenced Markdown
- lists and nested structure
- blockquotes and panels in readable form
- tables that are still reviewable in Git
- images and referenced media
- enough metadata to support static site generators or internal pipelines

The ac2md tooling is built around Atlassian Document Format rather than around a superficial copy-and-paste transformation. That matters because the source document model carries structure that a downstream Markdown workflow can preserve much more reliably.

## A practical migration workflow that does not collapse at step two

The safest migration path is not export first and clean up later. It is discover first, validate the workstation, confirm the scope, and then export.

### 1. Validate the environment

Both acp2md and acs2md document an operator-first flow built around doctor, configuration, and license validation before the first real export.
This catches missing credentials, license issues, and connectivity problems before a migration job is launched against real Confluence content.

### 2. Confirm the exact scope

For page work, confirm the exact page first. For space work, inventory the space before conversion. That sequence matters. It keeps migration planning grounded in the real page tree instead of assumptions about what is in the space.

### 3. Export to portable Markdown, not to a dead-end artifact

When the goal is Docs-as-Code, output should land in a directory structure that can live in Git, be published by a static site, or feed internal processing: acp2md handles the page-level export, and acs2md handles the space-scale export. At that point the migration is no longer theoretical. You have customer-controlled Markdown on disk.

## Where formatting preservation becomes operationally important

Formatting is not just visual polish. It directly affects three downstream systems.

### Git review

If headings, tables, and blocks are unstable, diffs become noisy and reviewers stop trusting the export.

### Static publishing

If links are not rewritten and hierarchy is lost, the exported tree does not behave like a documentation site. It behaves like a pile of files.

### AI and RAG pipelines

If the export strips too much structure, chunks become harder to interpret. If the export preserves the right semantic structure, Markdown becomes far more useful for retrieval and grounded generation. This is why Climakers positions these tools as migration, continuity, and AI-ready Markdown tooling rather than as simple exporters.

## How link rewriting and metadata front matter help

Two features matter immediately once content lands in a Docs-as-Code repository.

### Internal link rewriting

For a full-space migration, rewritten internal links are the difference between a self-contained docs estate and an export that still depends on Confluence URLs.
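As a sketch of what that rewrite does (the site URL, page ID, and file paths below are invented for illustration; the exact rewritten layout depends on how the export is configured):

```markdown
<!-- Before conversion: the link still targets Confluence Cloud -->
See the [Backup policy](https://example.atlassian.net/wiki/spaces/DOCS/pages/123456/Backup+policy).

<!-- After conversion with link rewriting: the link targets the exported tree -->
See the [Backup policy](./operations/backup-policy.md).
```

The second form is what makes the exported tree self-contained: it keeps working in Git review, in a static site build, and in an offline continuity scenario where Confluence is unreachable.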
### Front matter

Metadata front matter helps downstream systems keep useful context such as author, dates, status, IDs, and other operational attributes. That becomes useful for:

- static site generation
- audit and retention workflows
- content indexing
- migration validation
- internal automation

## Confluence Cloud scope matters

The active acp2md and acs2md tools target Confluence Cloud. They do not support Confluence Server or Data Center. That boundary matters because migration planning gets worse when teams assume a tool covers more deployment models than it actually does. Clean product boundaries are part of a migration-safe workflow.

## Why Docs-as-Code is also a compliance posture

A migration from Confluence to Markdown is not just a tooling decision. It is also a documented-information control decision under ISO 9001, ISO 27001, ISO 27017, NIS 2, and SOC 2. The same artifact that makes Docs-as-Code work — a customer-controlled Markdown estate in Git — is the artifact that turns each of these frameworks from a checklist into something you can actually evidence.

- ISO 9001:2015 clause 7.5 asks for control over documented information: creation, update, distribution, retrieval, and obsolescence. A Git repository encodes those operations as commits, pull requests, branches, and tags. There is no separate "evidence system" to maintain.
- ISO/IEC 27001:2022 treats documented operating procedures (A.5.31) and protection of records (A.5.33) as Annex A controls. When the procedure is a Markdown file in version control, you can produce the entire change history, the approver, and the supersession date with one `git log`.
- ISO/IEC 27017:2015 extends 27001 into cloud territory. Storing the authoritative copy of cloud-customer documentation outside the cloud provider's portal is exactly the kind of customer-side control the standard expects.
- NIS 2 requires documented policies on cybersecurity risk management, incident handling, and business continuity.
A Markdown estate is the most direct way to satisfy the documentation duty without locking yourself into a single SaaS platform whose availability is itself a risk.
- SOC 2 Common Criteria CC2.2 and CC2.3 require entities to communicate documented policies. Customer-controlled Markdown lets you publish the same source-of-truth content to internal portals, external trust pages, and auditors without divergence.

Treating Docs-as-Code as a compliance posture also changes how you justify the migration internally. You are not "moving away from Confluence". You are bringing the documented information that runs your business under controls your auditors already expect.

## A sensible migration checklist for Docs-as-Code teams

If you want the shortest path to a controlled migration, use this checklist:

1. Decide whether the job is page-level or space-level.
2. Configure credentials and validate with doctor.
3. Inspect the target page or space before conversion.
4. Export Markdown into a Git-friendly output path.
5. Confirm hierarchy, links, code blocks, tables, and media behavior.
6. Run the export again using --sync or incremental workflows when the goal is continuity, not just one-time migration.
7. Publish or process the Markdown in your static site, portal, or AI pipeline.

## Final take

The best Confluence-to-Docs-as-Code migration is not the one that produces the most files fastest. It is the one that gives your team control, portability, and a repeatable workflow without losing the formatting details that make documentation usable. If the problem starts with one page, use acp2md. If it starts with a whole documentation estate, use acs2md. In both cases, the goal is the same: move from knowledge trapped in a proprietary workspace to customer-controlled Markdown that can survive migration, continuity, publishing, AI reuse, and the next ISO 27001 surveillance audit.
When you are ready, start with acp2md for a single page or pick an acs2md plan for the whole space.]]></content:encoded>
            <category>docs-as-code</category>
            <category>confluence</category>
            <category>markdown</category>
            <category>migration</category>
            <category>acp2md</category>
            <category>acs2md</category>
            <category>ai-ready-content</category>
            <category>compliance</category>
            <category>iso-9001</category>
            <category>iso-27001</category>
            <category>nis-2</category>
            <category>soc-2</category>
        </item>
    </channel>
</rss>