{"id":18,"date":"2026-01-13T03:02:00","date_gmt":"2026-01-13T03:02:00","guid":{"rendered":"https:\/\/137.184.37.140\/?p=18"},"modified":"2026-04-13T03:48:22","modified_gmt":"2026-04-13T03:48:22","slug":"building-for-outcomes","status":"publish","type":"post","link":"https:\/\/pptx.wtf\/?p=18","title":{"rendered":"Building for Outcomes"},"content":{"rendered":"\n<p>As a MarTech engineer, what drives you? MarTech engineers are often tasked with building bridges, making connections, pipelines across seemingly disjoint set of tools that do completely disparate things. All these tooling is expected to unlock a new vein of experimentation, speed up processes so you can do more experimentation, better measurement of experiments or simply save costs of experimentation. In theory it all looks good, after all MarTech is marketing enablement. So whenever you ship a new API, integrate a new vendor, add a new event, that should start ringing up the cash registers, right?<\/p>\n\n\n\n<p id=\"b162\">In practice, however, when the math strikes, the reality is rarely so.<\/p>\n\n\n\n<p id=\"8127\">A variety of reasons why they do not check out.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>The solution was not the right solution in the first place<\/li>\n\n\n\n<li>Solution was an MVP and was not meant to scale, i.e. does not allow for all types of experiments that would enable the kind of incremental revenue that this was sold on. 
This one is the most common.<\/li>\n\n\n\n<li>The discount factor was not applied when the incremental revenue impact was thrown around<\/li>\n\n\n\n<li>The time horizon for the incremental revenue to kick in, and the engineering effort to keep the systems running up until that point, do not really add up.<\/li>\n\n\n\n<li>Running two or more systems concurrently that do similar things.<\/li>\n<\/ol>\n\n\n\n<p id=\"aa08\">While we can all point fingers at each other \u2014 PMs, Engineers, Marketing, Data Engineering and so on \u2014 the fact is that all these problems exist because we speak different languages. There is rarely a common taxonomy, or a value framework, that we all subscribe to.<\/p>\n\n\n\n<p id=\"6836\">Enter the <strong>OLED Framework<\/strong>. OLED is a lightweight framework that expands to Outcomes, Levers, Experiments and Diagnostics. It doesn\u2019t require a new tool, a new process, or a new committee. It simply forces clarity \u2014 the kind of clarity that prevents wasted work, misaligned expectations, and \u201cwe shipped it, but nothing changed\u201d moments.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"9896\">Outcomes<\/h2>\n\n\n\n<p id=\"949d\">Define the business result before you define the feature.<\/p>\n\n\n\n<p id=\"46e1\">Most teams start with the feature.<br>\u201cWhat if we add this event?\u201d<br>\u201cWhat if we build this workflow?\u201d<br>\u201cWhat if we integrate this tool?\u201d<\/p>\n\n\n\n<p id=\"4acb\">Outcome\u2011oriented teams start with the business result.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Increase conversion by X%<\/li>\n\n\n\n<li>Reduce churn for Y segment by 5%<\/li>\n\n\n\n<li>Improve identity match rate<\/li>\n\n\n\n<li>Increase experiment velocity<\/li>\n\n\n\n<li>Reduce time\u2011to\u2011launch for campaigns<\/li>\n\n\n\n<li>Improve data freshness or reliability<\/li>\n<\/ul>\n\n\n\n<p id=\"053d\">Your outcome should not simply be a vague \u201cincrease incremental revenue by X basis 
points\u201d. You want to break it down into the simplest terms possible, because doing so forces clarity of thought for everyone.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"1ec3\">Levers<\/h2>\n\n\n\n<p id=\"d7df\">Identify the system levers the feature actually influences. This is where engineers thrive. Every feature touches a lever \u2014 a part of the system that can actually move the outcome.<\/p>\n\n\n\n<p id=\"b3ce\">Examples of levers:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Data quality (accuracy, completeness, latency)<\/li>\n\n\n\n<li>Identity resolution (match rate, dedupe rate)<\/li>\n\n\n\n<li>Activation speed (API latency, batch windows)<\/li>\n\n\n\n<li>Experimentation throughput (# of tests per month)<\/li>\n\n\n\n<li>Content velocity (time to generate variants)<\/li>\n\n\n\n<li>Model performance (AUC, precision, recall)<\/li>\n<\/ul>\n\n\n\n<p id=\"81a9\">This step answers the critical question:<\/p>\n\n\n\n<p id=\"58b3\"><strong>\u201cDoes this feature actually have the power to move the outcome we care about?\u201d<\/strong><\/p>\n\n\n\n<p id=\"2f08\">It prevents magical thinking \u2014 the belief that a feature will influence a KPI it has no leverage over.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"6160\">Experiments<\/h2>\n\n\n\n<p id=\"469f\">Define how you\u2019ll validate the impact \u2014 before you ship. Every feature should have a validation plan:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A\/B test<\/li>\n\n\n\n<li>Holdout group<\/li>\n\n\n\n<li>Stealth mode releases<\/li>\n\n\n\n<li>Dogfooding releases<\/li>\n\n\n\n<li>Pre\/Post comparison<\/li>\n\n\n\n<li>Model offline vs online comparison<\/li>\n<\/ul>\n\n\n\n<p id=\"8d34\">This step protects engineering teams from being blamed for \u201cno impact\u201d when the business never set up a way to measure impact. 
It also protects product teams from shipping features that feel good but do nothing.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"8e54\">Diagnostics<\/h2>\n\n\n\n<p id=\"180e\">Define the signals that tell you&nbsp;<em>why<\/em>&nbsp;it worked or didn\u2019t, and how long you want to keep running it before you kill it or build the next phase. This is the most overlooked part of outcome measurement \u2014 and the most important.<\/p>\n\n\n\n<p id=\"4af0\">Diagnostics are the&nbsp;<strong>leading and lagging indicators&nbsp;<\/strong>that tell you whether the feature is behaving as expected.<\/p>\n\n\n\n<p id=\"4a46\">Examples:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Event volume<\/li>\n\n\n\n<li>Metadata completeness<\/li>\n\n\n\n<li>API latency<\/li>\n\n\n\n<li>Error rates<\/li>\n\n\n\n<li>Identity match rate<\/li>\n\n\n\n<li>Model inference time<\/li>\n\n\n\n<li>Data freshness<\/li>\n\n\n\n<li>Enrollment rate<\/li>\n<\/ul>\n\n\n\n<p id=\"4ea2\">Diagnostics answer two essential questions:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Is the feature functioning the way we intended?<\/strong><\/li>\n\n\n\n<li><strong>If the outcome didn\u2019t move, do we know why?<\/strong><\/li>\n<\/ol>\n\n\n\n<p id=\"f7d2\">Without diagnostics, teams end up in circular debates:<\/p>\n\n\n\n<p id=\"e2e2\">\u201cIt didn\u2019t work.\u201d \u201cThe experiment was wrong.\u201d \u201cNo, the data was wrong.\u201d \u201cNo, the KPI was wrong.\u201d We have seen these play out in so many different forums.<\/p>\n\n\n\n<p id=\"d847\">Let&#8217;s run the OLED model through a couple of different examples.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"1231\">Reducing Email Send Latency in the Activation Layer<\/h3>\n\n\n\n<p id=\"27be\"><strong>Primary Outcome:<\/strong><\/p>\n\n\n\n<p id=\"4613\">Increase revenue from triggered lifecycle campaigns (e.g., abandoned cart) by reducing the delay between user action and email send.<\/p>\n\n\n\n<p id=\"726a\"><strong>Secondary 
Outcomes:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Increase conversion rate of triggered emails<\/li>\n\n\n\n<li>Improve customer experience<\/li>\n\n\n\n<li>Reduce engineering maintenance overhead<\/li>\n\n\n\n<li>Improve deliverability (fewer stale events)<\/li>\n\n\n\n<li>Increase experiment velocity for lifecycle flows<\/li>\n<\/ul>\n\n\n\n<p id=\"b145\"><strong>Levers<\/strong><\/p>\n\n\n\n<p id=\"9fcf\">These are the system levers that actually move the outcome:<\/p>\n\n\n\n<p id=\"f2f0\"><strong>1. Event Freshness<\/strong><\/p>\n\n\n\n<p id=\"821a\">How quickly events (cart_add, checkout_start) reach the activation system.<\/p>\n\n\n\n<p id=\"e339\"><strong>2. Trigger Latency<\/strong><\/p>\n\n\n\n<p id=\"608b\">Time between event ingestion \u2192 trigger evaluation \u2192 ESP send.<\/p>\n\n\n\n<p id=\"3668\"><strong>3. Queue Throughput<\/strong><\/p>\n\n\n\n<p id=\"9847\">Kafka\/Kinesis\/SQS queue depth and processing speed.<\/p>\n\n\n\n<p id=\"9b1f\"><strong>4. ESP API Performance<\/strong><\/p>\n\n\n\n<p id=\"a1a2\">Braze, Iterable, SFMC all have rate limits and throttling.<\/p>\n\n\n\n<p id=\"7af0\"><strong>5. Data Model Completeness<\/strong><\/p>\n\n\n\n<p id=\"4388\">Missing metadata (SKU, price, discount) reduces email relevance.<\/p>\n\n\n\n<p id=\"48f3\"><strong>6. Retry &amp; Error Handling<\/strong><\/p>\n\n\n\n<p id=\"1842\">Dropped events = lost revenue.<\/p>\n\n\n\n<p id=\"c89a\"><strong>Experiments<\/strong><\/p>\n\n\n\n<p id=\"267b\">How we validate the impact:<\/p>\n\n\n\n<p id=\"096b\"><strong>1. 
Latency Bucket Testing<\/strong><\/p>\n\n\n\n<p id=\"ab03\">Group users by latency buckets:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>&lt;1 minute<\/li>\n\n\n\n<li>1\u20135 minutes<\/li>\n\n\n\n<li>5\u201315 minutes<\/li>\n\n\n\n<li>15\u201330 minutes<\/li>\n<\/ul>\n\n\n\n<p id=\"77f3\">Measure conversion rate by bucket.<\/p>\n\n\n\n<p id=\"8984\">This is how companies like Amazon, Walmart, and Shopify optimize lifecycle triggers.<\/p>\n\n\n\n<p id=\"8a29\"><strong>2. A\/B Test: Batch vs. Streaming<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Group A: Batch sends every 30 minutes<\/li>\n\n\n\n<li>Group B: Real\u2011time streaming triggers<\/li>\n<\/ul>\n\n\n\n<p id=\"b9b3\">Measure:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>conversion<\/li>\n\n\n\n<li>revenue per email<\/li>\n\n\n\n<li>unsubscribe rate<\/li>\n\n\n\n<li>deliverability<\/li>\n<\/ul>\n\n\n\n<p id=\"3541\"><strong>3. Before\/After Analysis<\/strong><\/p>\n\n\n\n<p id=\"b353\">Compare:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>average latency<\/li>\n\n\n\n<li>revenue per trigger<\/li>\n\n\n\n<li>event drop rate<\/li>\n\n\n\n<li>ESP throttling events<\/li>\n<\/ul>\n\n\n\n<p id=\"c507\"><strong>4. Shadow Mode<\/strong><\/p>\n\n\n\n<p id=\"bfe7\">Run the new pipeline in parallel to validate:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>event counts<\/li>\n\n\n\n<li>trigger accuracy<\/li>\n\n\n\n<li>metadata completeness<\/li>\n<\/ul>\n\n\n\n<p id=\"2444\"><strong>Diagnostics<\/strong><\/p>\n\n\n\n<p id=\"13d7\">These are the signals that tell you&nbsp;<em>why<\/em>&nbsp;latency improved or didn\u2019t.<\/p>\n\n\n\n<p id=\"62db\"><strong>1. End\u2011to\u2011End Latency Breakdown<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>event \u2192 warehouse<\/li>\n\n\n\n<li>warehouse \u2192 CDP<\/li>\n\n\n\n<li>CDP \u2192 ESP<\/li>\n\n\n\n<li>ESP \u2192 inbox<\/li>\n<\/ul>\n\n\n\n<p id=\"2e82\"><strong>2. 
Queue Depth &amp; Lag<\/strong><\/p>\n\n\n\n<p id=\"fc85\">If Kafka lag &gt; 5 minutes, you have a bottleneck.<\/p>\n\n\n\n<p id=\"2850\"><strong>3. ESP Throttling<\/strong><\/p>\n\n\n\n<p id=\"ca37\">Braze and Iterable both expose throttling metrics.<\/p>\n\n\n\n<p id=\"a45d\"><strong>4. Error Rate<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>dropped events<\/li>\n\n\n\n<li>malformed payloads<\/li>\n\n\n\n<li>schema mismatches<\/li>\n<\/ul>\n\n\n\n<p id=\"2082\"><strong>5. Trigger Enrollment Rate<\/strong><\/p>\n\n\n\n<p id=\"ae8a\">If fewer users enter the flow, something broke.<\/p>\n\n\n\n<p id=\"6d0a\"><strong>6. Deliverability Metrics<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>bounce rate<\/li>\n\n\n\n<li>spam complaints<\/li>\n\n\n\n<li>inbox placement<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"59b3\"><strong>Introducing a New Self-Serve Audience Builder for Marketers<\/strong><\/h3>\n\n\n\n<p id=\"d7c2\"><strong>Primary Outcome:<\/strong><\/p>\n\n\n\n<p id=\"c86f\">Increase campaign velocity and reduce dependency on engineering by enabling marketers to build, test, and activate audiences without SQL or ticket queues.<\/p>\n\n\n\n<p id=\"2ef8\"><strong>Secondary Outcomes:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Increase number of experiments per quarter<\/li>\n\n\n\n<li>Improve segmentation accuracy<\/li>\n\n\n\n<li>Reduce time\u2011to\u2011launch for campaigns<\/li>\n\n\n\n<li>Increase pipeline influenced by targeted audiences<\/li>\n\n\n\n<li>Reduce data engineering backlog by 20\u201340%<\/li>\n<\/ul>\n\n\n\n<p id=\"6da8\"><strong>Levers<\/strong><\/p>\n\n\n\n<p id=\"bced\"><strong>1. Data Accessibility<\/strong><\/p>\n\n\n\n<p id=\"7796\">Marketers can access unified customer data (CRM + product + marketing events) without engineering tickets.<\/p>\n\n\n\n<p id=\"6b85\"><strong>2. 
Query Generation Accuracy<\/strong><\/p>\n\n\n\n<p id=\"7fa4\">LLM\u2011powered or UI\u2011driven audience logic must translate into correct SQL or API calls.<\/p>\n\n\n\n<p id=\"c239\"><strong>3. Identity Resolution<\/strong><\/p>\n\n\n\n<p id=\"4fa5\">Audience accuracy depends on the underlying identity graph.<\/p>\n\n\n\n<p id=\"27a0\"><strong>4. Activation Speed<\/strong><\/p>\n\n\n\n<p id=\"b752\">How fast audiences sync to channels (email, ads, push, SMS).<\/p>\n\n\n\n<p id=\"c13d\"><strong>5. Governance &amp; Compliance<\/strong><\/p>\n\n\n\n<p id=\"27bb\">Audience builder must enforce consent, suppression, and PII rules.<\/p>\n\n\n\n<p id=\"42ed\"><strong>6. Experimentation Throughput<\/strong><\/p>\n\n\n\n<p id=\"9b81\">More audiences \u2192 more tests \u2192 more learnings.<\/p>\n\n\n\n<p id=\"bf4f\"><strong>Experiments<\/strong><\/p>\n\n\n\n<p id=\"156b\"><strong>1. A\/B Test: Self\u2011Serve vs. Engineer\u2011Built Audiences<\/strong><\/p>\n\n\n\n<p id=\"82e5\">Group A: Marketers use the new builder<\/p>\n\n\n\n<p id=\"21ca\">Group B: Marketers submit tickets to engineering<br>Measure differences in:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>time\u2011to\u2011launch<\/li>\n\n\n\n<li>number of audiences created<\/li>\n\n\n\n<li>number of experiments run<\/li>\n\n\n\n<li>campaign performance<\/li>\n<\/ul>\n\n\n\n<p id=\"79de\"><strong>2. Shadow Mode Query Validation<\/strong><\/p>\n\n\n\n<p id=\"20be\">Run LLM\u2011generated queries in parallel with human\u2011written SQL to compare:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>audience counts<\/li>\n\n\n\n<li>segment membership<\/li>\n\n\n\n<li>logic accuracy<\/li>\n\n\n\n<li>cost to run<\/li>\n<\/ul>\n\n\n\n<p id=\"7f3c\"><strong>3. 
Pre\/Post Analysis<\/strong><\/p>\n\n\n\n<p id=\"b788\">Compare:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>average campaign launch time<\/li>\n\n\n\n<li>number of active segments<\/li>\n\n\n\n<li>number of experiments per month<\/li>\n\n\n\n<li>engineering ticket volume<\/li>\n<\/ul>\n\n\n\n<p id=\"2652\"><strong>4. Channel\u2011Level Performance Tests<\/strong><\/p>\n\n\n\n<p id=\"f9d2\">Test whether more granular audiences improve:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>open\/click rates<\/li>\n\n\n\n<li>personalization experiments<\/li>\n\n\n\n<li>conversion rates<\/li>\n\n\n\n<li>CAC<\/li>\n\n\n\n<li>ROAS<\/li>\n<\/ul>\n\n\n\n<p id=\"4834\"><strong>Diagnostics<\/strong><\/p>\n\n\n\n<p id=\"088b\"><strong>1. Query Error Rate<\/strong><\/p>\n\n\n\n<p id=\"17ca\">% of audience definitions that fail due to schema issues, missing fields, or invalid logic.<\/p>\n\n\n\n<p id=\"0c4f\"><strong>2. Query Execution Time<\/strong><\/p>\n\n\n\n<p id=\"3371\">Slow queries use up too much bandwidth, leading to slow activation.<\/p>\n\n\n\n<p id=\"7df7\"><strong>3. Query Cost<\/strong><\/p>\n\n\n\n<p id=\"9a1f\">Poorly written queries drive up the cost to run over an extended period of time.<\/p>\n\n\n\n<p id=\"d9c6\"><strong>4. Audience Size Drift<\/strong><\/p>\n\n\n\n<p id=\"adc3\">Unexpected spikes or drops indicate logic issues.<\/p>\n\n\n\n<p id=\"9c7a\"><strong>5. Adoption Metrics<\/strong><\/p>\n\n\n\n<p id=\"2ac1\"># of marketers using the tool<\/p>\n\n\n\n<p id=\"2b0c\"># of audiences created<\/p>\n\n\n\n<p id=\"6acd\"># of audiences activated<\/p>\n\n\n\n<p id=\"e386\"># of channels connected<\/p>\n\n\n\n<p id=\"ba11\"><strong>6. Data Freshness<\/strong><\/p>\n\n\n\n<p id=\"b8fa\">If warehouse \u2192 CDP sync is delayed, audiences are stale.<\/p>\n\n\n\n<p id=\"a8c0\"><strong>7. 
Governance Violations<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>PII misuse<\/li>\n\n\n\n<li>consent violations<\/li>\n\n\n\n<li>missing suppression logic<\/li>\n<\/ul>\n\n\n\n<p id=\"7537\"><strong>Why OLED Works<\/strong><\/p>\n\n\n\n<p id=\"f432\">OLED works because it aligns three worlds that rarely speak the same language: Engineers, Product Managers and Marketing\/Business Leaders. It turns feature delivery into&nbsp;<strong>hypothesis\u2011driven engineering<\/strong>, not ticket\u2011driven engineering. You create a culture where teams don\u2019t just ship \u2014 they learn, iterate, and improve \u2014 together.<\/p>\n\n\n\n<p id=\"1abf\">And in a world where AI is accelerating everything \u2014 good systems and bad systems alike \u2014 clarity of outcomes is no longer optional.<\/p>\n\n\n\n<p id=\"7fc0\"><strong>The Future Belongs to Outcome\u2011Oriented Teams<\/strong><\/p>\n\n\n\n<p id=\"35f3\">The next era of MarTech engineering won\u2019t be defined by who ships the most features. It will be defined by who ships the most&nbsp;<strong>impact<\/strong>.<\/p>\n\n\n\n<p id=\"25b3\">Teams that adopt OLED, or a similar version of it, will:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Build less, but achieve more<\/li>\n\n\n\n<li>Move faster, with fewer mistakes<\/li>\n\n\n\n<li>Align engineering, product, and marketing<\/li>\n\n\n\n<li>Deliver outcomes, not artifacts<\/li>\n<\/ul>\n\n\n\n<p id=\"2264\">A feature is only as valuable as the outcome it creates.<\/p>\n\n\n\n<p>[All opinions are my own and have no relation to my employers \u2014 past or present. In a rapidly growing Agentic world, I write about the theme of accountability across different systems \u2014 humans or technology. 
I use&nbsp;<a href=\"https:\/\/huffl.ai\/\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/huffl.ai<\/a>&nbsp;to structure my thoughts]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As a MarTech engineer, what drives you? MarTech engineers are often tasked with building bridges, connections, and pipelines across seemingly disjoint sets of tools that do completely disparate things. All this tooling is expected to unlock a new vein of experimentation, speed up processes so you can run more experiments, measure experiments better, or [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":19,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5],"tags":[],"class_list":["post-18","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-accountability"],"_links":{"self":[{"href":"https:\/\/pptx.wtf\/index.php?rest_route=\/wp\/v2\/posts\/18","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pptx.wtf\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/pptx.wtf\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/pptx.wtf\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/pptx.wtf\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=18"}],"version-history":[{"count":2,"href":"https:\/\/pptx.wtf\/index.php?rest_route=\/wp\/v2\/posts\/18\/revisions"}],"predecessor-version":[{"id":32,"href":"https:\/\/pptx.wtf\/index.php?rest_route=\/wp\/v2\/posts\/18\/revisions\/32"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/pptx.wtf\/index.php?rest_route=\/wp\/v2\/media\/19"}],"wp:attachment":[{"href":"https:\/\/pptx.wtf\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=18"}],"wp:term":[{
"taxonomy":"category","embeddable":true,"href":"https:\/\/pptx.wtf\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=18"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/pptx.wtf\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=18"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}