<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Samuel Adebayo | Research Fellow at QUB, Researcher in Machine Learning and Computer Vision]]></title><description><![CDATA[Samuel Adebayo: Research Fellow at Queen's University Belfast]]></description><link>https://samueladebayo.com</link><generator>RSS for Node</generator><lastBuildDate>Wed, 22 Apr 2026 08:21:36 GMT</lastBuildDate><atom:link href="https://samueladebayo.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Reward Signal you live by]]></title><description><![CDATA[I first heard of the contextual bandit algorithm a couple of years back as an undergrad. I never gave much thought to it. Recently, I started working on reinforcement learning for the thrill of picking up my HRI research again and shaking off the dus...]]></description><link>https://samueladebayo.com/the-reward-signal-you-live-by</link><guid isPermaLink="true">https://samueladebayo.com/the-reward-signal-you-live-by</guid><category><![CDATA[contextual-bandits]]></category><category><![CDATA[Reinforcement Learning]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[decision making]]></category><category><![CDATA[habits]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Mon, 16 Feb 2026 20:05:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1771271846807/e91bdcd7-2853-4e50-803d-ff14b14aaadc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I first heard of the contextual bandit algorithm a couple of years back as an undergrad. I never gave much thought to it. 
Recently, I started working on reinforcement learning for the thrill of picking up my HRI research again and shaking off the dusty old project feeling. So I became enthralled, though not for the reason you would think (not to sound unserious): it wasn’t the maths that grabbed me first, but the irony of how our day-to-day lives have started to look like a living demo of what this algorithm quietly assumes. A virtual world, not the sci-fi kind, just the one we have built with screens, pings, feeds, and tiny choices that somehow add up to a whole personality.</p>
<p>For those who don’t know what a contextual bandit is, here is the layman’s version: you are in a situation, you have a few options, and the best option depends on the current context, the time, the mood, and perhaps, in this case, the setting. You pick one option, and you only get feedback on the one you picked. The other options stay silent; there is no counterfactual, no “what if I had chosen differently?” Every single turn is just context, action, reward. Over time, though, the algorithm learns what tends to work in what kind of moment. And that’s where it starts to feel familiar, because life is basically that. We make choices with partial feedback, and we learn patterns from whatever results shout the loudest — you get model weights (wink wink, ML folks).</p>
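<p>That loop (context in, one action out, one reward back, nothing about the roads not taken) is easy to sketch in code. Below is a toy epsilon-greedy contextual bandit; the contexts and actions are made up for illustration, and real implementations (LinUCB, Thompson sampling) generalise across contexts rather than tabulating them.</p>

```python
import random
from collections import defaultdict

class EpsilonGreedyContextualBandit:
    """Toy contextual bandit: keeps a running mean reward per (context, action)."""

    def __init__(self, actions, epsilon=0.1, seed=0):
        self.actions = list(actions)
        self.epsilon = epsilon            # how often we explore at random
        self.rng = random.Random(seed)
        self.totals = defaultdict(float)  # (context, action) -> summed reward
        self.counts = defaultdict(int)    # (context, action) -> times pulled

    def choose(self, context):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self._estimate(context, a))

    def update(self, context, action, reward):
        # Partial feedback: we only ever learn about the arm we actually pulled.
        self.totals[(context, action)] += reward
        self.counts[(context, action)] += 1

    def _estimate(self, context, action):
        n = self.counts[(context, action)]
        return self.totals[(context, action)] / n if n else 0.0

# Hypothetical day: in the "morning" context, reading pays off more than scrolling.
bandit = EpsilonGreedyContextualBandit(["read", "scroll"], epsilon=0.1)
bandit.update("morning", "read", 1.0)
bandit.update("morning", "scroll", 0.1)
```

<p>Note the asymmetry the essay leans on: <code>update</code> only ever touches the chosen arm, so whatever the environment rewards most reliably is what the policy quietly drifts toward.</p>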
<p>Surprisingly, the part I can’t shake off is this: a bandit will optimise whatever reward signal it’s given. If the signal is shallow, it can become extremely good at shallow outcomes, and it won’t even know it is missing anything. So if my reward, my gratification, is quick relief, being noticed, staying busy, or avoiding silence, then my habits train toward those things with frightening efficiency. Truth is, I wouldn’t need a villain, just hard, cold repetition. Which is why I have come to think of reward as something I would rather choose on purpose, not something I inherit by default. Because, just as in learning, some rewards are noisy and immediate, quick and dirty, and some are quiet and slow, and the quiet ones tend to be the ones that make a life feel like it’s actually there.</p>
<p>I hope that you make good choices every day. Truly.</p>
<p>SAO</p>
]]></content:encoded></item><item><title><![CDATA[EU AI Act: Evidence as a Release Artefact]]></title><description><![CDATA[Disclaimer (read first): This article is informational only and not legal advice. The EU AI Act is a legal text; how it applies depends on your role (provider/deployer/importer/distributor), your system’s classification (high-risk, transparency-risk,...]]></description><link>https://samueladebayo.com/eu-ai-act-evidence-as-a-release-artefact</link><guid isPermaLink="true">https://samueladebayo.com/eu-ai-act-evidence-as-a-release-artefact</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[eu ai act]]></category><category><![CDATA[AI Systems]]></category><category><![CDATA[llm]]></category><category><![CDATA[mlops]]></category><category><![CDATA[AI Governance]]></category><category><![CDATA[responsible AI]]></category><category><![CDATA[Responsible AI Practices]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Tue, 27 Jan 2026 01:54:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1769477777690/e0547878-f048-43c1-8efe-c029b50e3a1f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p><strong>Disclaimer (read first):</strong> This article is informational only and <strong>not legal advice</strong>. The EU AI Act is a legal text; how it applies depends on your role (provider/deployer/importer/distributor), your system’s classification (high-risk, transparency-risk, prohibited, etc.), and your deployment context. Always consult qualified legal counsel for compliance decisions.</p>
</blockquote>
<p>The EU AI Act (formally <em>Regulation (EU) 2024/1689</em>) is Europe’s landmark AI regulation (<a target="_blank" href="https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng">official text</a>). It uses a risk-based framework: obligations scale with the potential impact of the system, from prohibited practices (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-5">Article 5</a>) to high-risk systems (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-6">Article 6</a>, <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-1">Annex I</a>, <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-3">Annex III</a>) and transparency obligations (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-50">Article 50</a>).</p>
<p>In practice, teams often think in four buckets:</p>
<ul>
<li><p><strong>Prohibited practices</strong> — not allowed to deploy (see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-5">Article 5</a>).</p>
</li>
<li><p><strong>High-risk systems</strong> — allowed, but subject to requirements such as technical documentation (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-11">Article 11</a> and <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-4">Annex IV</a>), record-keeping (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-12">Article 12</a>), and post-market monitoring (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-72">Article 72</a>).</p>
</li>
<li><p><strong>Transparency obligations</strong> — certain systems must disclose AI use or label synthetic content (see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-50">Article 50</a>).</p>
</li>
<li><p><strong>Everything else</strong> — many AI systems fall outside the high-risk obligations, but customers, auditors, and procurement may still expect clear evidence of intent, evaluation, and monitoring.</p>
</li>
</ul>
<p><img src="https://ormedian.com/blog/eu-ai-act/hero.png" alt="EU AI Act evidence as a release artefact" /></p>
<p><em>Evidence stays coupled to the deployed version — not scattered across tools.</em></p>
<p>Critically, the Act doesn’t just set one-time rules; it imposes ongoing obligations across the AI system’s lifecycle. This includes maintaining up-to-date technical documentation, keeping detailed logs, and actively monitoring post-market performance for issues. These evidence obligations are where many teams struggle – and that’s exactly the gap this post will address.</p>
<h2 id="heading-why-this-post-exists-evidence-is-the-new-delivery-artefact">Why this post exists: “evidence” is the new delivery artefact</h2>
<p>Most teams don’t fail governance because they lack intent. They fail because evidence is not treated like a first-class release artefact.</p>
<ul>
<li><p>Docs live in a wiki.</p>
</li>
<li><p>Metrics live in an experiment tracker.</p>
</li>
<li><p>Logs live in a monitoring tool.</p>
</li>
<li><p>Risk notes live in someone’s head (or a one-off PDF).</p>
</li>
</ul>
<p>Then procurement, auditors, or regulators ask: <em>“Show me what changed between v1.3 and v1.4, and why that change is safe.”</em></p>
<p>That’s the gap the EU AI Act amplifies: it pushes organizations to maintain technical documentation, logs, and post-market monitoring as ongoing lifecycle obligations, not a one-time “paperwork sprint.” In other words, if you can’t produce up-to-date evidence for how your AI system was built, how it operates, and how it’s being overseen, you won’t meet the bar.</p>
<p>The practical take is simple:</p>
<blockquote>
<p><strong>Ship an Assurance Pack for every release.</strong><br /><em>A versioned, shareable evidence bundle that stays coupled to the system version you actually deployed.</em></p>
</blockquote>
<p>This article explains:</p>
<ol>
<li><p>the EU AI Act timeline (what kicks in when),</p>
</li>
<li><p>the evidence obligations that matter most for engineering teams, and</p>
</li>
<li><p>how to operationalize them as a release artefact (Assurance Packs) so evidence stays current.</p>
</li>
</ol>
<h2 id="heading-eu-ai-act-in-5-minutes-the-phased-timeline-that-matters">EU AI Act in 5 minutes: the phased timeline that matters</h2>
<p><img src="https://ormedian.com/blog/eu-ai-act/timeline.png" alt="EU AI Act timeline milestones" /></p>
<p><em>High-level timeline of key enforcement milestones.</em></p>
<p>The EU AI Act entered into force in 2024 and applies progressively over the next few years. The Commission’s Service Desk <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act">timeline</a> is the simplest way to track what takes effect when:</p>
<ul>
<li><p>02 Feb 2025 — General provisions start to apply, including definitions (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-3">Article 3</a>), AI literacy (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-4">Article 4</a>), and prohibited practices (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-5">Article 5</a>) (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act">timeline</a>).</p>
</li>
<li><p>02 Aug 2025 — Governance structures and general-purpose AI (GPAI) obligations begin to apply (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act">timeline</a>).</p>
</li>
<li><p>02 Aug 2026 — The majority of rules apply; this is <em>“day one” of enforcement</em> for many high-risk requirements (notably systems in <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-3">Annex III</a>) and transparency obligations (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-50">Article 50</a>). Member States should have at least one AI regulatory sandbox operational by this date (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act">timeline</a>).</p>
</li>
<li><p>02 Aug 2027 — High-risk obligations extend to certain AI systems embedded in regulated products (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-1">Annex I</a>) (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act">timeline</a>).</p>
</li>
</ul>
<p>If you remember nothing else: you don’t have “years” to think about evidence. The discipline needs to be built into delivery now. By August 2026, any high-risk AI system you offer in the EU must have compliance evidence ready on demand – and if you’re selling into regulated industries or the public sector, the <em>expectation</em> (even before 2026) is that you show this mindset today.</p>
<h2 id="heading-first-decision-are-you-even-in-scope-and-at-what-risk-level">First decision: are you even in scope, and at what risk level?</h2>
<h3 id="heading-the-act-is-risk-based">The Act is risk-based</h3>
<p>As noted, the AI Act is risk-based: it defines prohibited practices (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-5">Article 5</a>), high-risk systems (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-6">Article 6</a> with lists in <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-1">Annex I</a> and <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-3">Annex III</a>), transparency obligations (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-50">Article 50</a>), and the core definitions you’ll need to interpret all of that (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-3">Article 3</a>).</p>
<ul>
<li><p>Prohibited AI – Don’t go there. These are use-cases banned outright (e.g. exploiting vulnerabilities of specific groups, social scoring, certain types of biometric surveillance) (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-5">Article 5</a>). No compliance regime exists because you’re not allowed to deploy them at all.</p>
</li>
<li><p>High-risk AI – This is the most consequential category for compliance. High-risk systems are the only ones subject to the full brunt of the AI Act’s requirements (technical documentation, logging, monitoring, human oversight, etc.).</p>
</li>
<li><p>Transparency-risk AI – These include systems like conversational AI or deepfake generators that aren’t high-risk but require user disclosures or other transparency measures (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-50">Article 50</a>).</p>
</li>
<li><p>Minimal-risk AI – Everything else. No specific obligations (beyond existing laws), though voluntary best practices are encouraged.</p>
</li>
</ul>
<h3 id="heading-high-risk-classification-the-engineering-trigger">High-risk classification (the engineering trigger)</h3>
<p>So what makes an AI system “high-risk”? The classification rules are in <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-6">Article 6</a>. In simple terms, an AI system is high-risk if either:</p>
<ol>
<li><p>it’s intended to be used as a safety component of a regulated product (or is itself such a product) listed in <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-1">Annex I</a> (think: brakes controlled by AI, AI in medical devices); or</p>
</li>
<li><p>it’s one of the use-cases listed in <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-3">Annex III</a> (areas like education, employment, critical infrastructure, law enforcement, etc.).</p>
</li>
</ol>
<p>There is nuance: some Annex III systems can fall out of scope if they do not pose a significant risk — but those exceptions are narrow and must be justified and documented (see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-6">Article 6</a>).</p>
<p>In short, “high-risk” is the default if your use-case is in Annex III. The burden is on you to prove otherwise. When in doubt, assume high-risk and prepare the evidence accordingly.</p>
<h3 id="heading-ai-model-vs-ai-system">“AI model” vs “AI system”</h3>
<p>Another scope question: many teams use pretrained models or APIs (think GPT-4, vision APIs). Are you a provider of an AI system, or just a user of an AI model? The definitions you need (including “AI system” and related actor roles) are in <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-3">Article 3</a>. In practice, a model becomes part of an AI system when you add the surrounding components (data inputs, a user interface, decision logic, and operating context) that define its purpose.</p>
<p>Why does this matter? Because obligations differ depending on whether you are:</p>
<ul>
<li><p>a provider placing an AI system on the EU market or putting it into service,</p>
</li>
<li><p>a deployer using an AI system internally,</p>
</li>
<li><p>a provider of a general-purpose AI model (e.g. offering a foundation model), or</p>
</li>
<li><p>part of the distribution chain (importer, distributor).</p>
</li>
</ul>
<p>For example, if you fine-tune a large language model and offer it as a SaaS product for hiring, you’re the provider of a high-risk AI system (employment = <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-3">Annex III</a>). If you just consume someone else’s API for internal use, you’re a deployer and have a lighter (but not zero) load. As an engineering leader, identify which role fits you now (see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-3">Article 3</a>): it determines how you approach evidence and ongoing obligations.</p>
<h2 id="heading-the-evidence-triangle-what-the-eu-ai-act-forces-you-to-produce-and-maintain">The “evidence triangle”: what the EU AI Act forces you to produce and maintain</h2>
<p>Strip away the legalese and you can boil the Act’s compliance down to three continuous evidence streams (for high-risk systems):</p>
<p><img src="https://ormedian.com/blog/eu-ai-act/evidence-triangle.png" alt="Evidence triangle: technical documentation, logs, and post-market monitoring" /></p>
<p><em>Evidence triangle: technical documentation + logs + post-market monitoring.</em></p>
<ol>
<li><p>Technical documentation – Required under <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-11">Article 11</a>, with the detailed contents in <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-4">Annex IV</a>. It covers what the system is, intended purpose, how it works, how it was validated, and how it evolves between versions.</p>
</li>
<li><p>Record-keeping (logging) – Automatic logs that capture the system’s operation and outcomes, sufficient to trace issues or decisions (see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-12">Article 12</a>). Essentially, an audit trail for the AI’s functioning.</p>
</li>
<li><p>Post-market monitoring – A proactive plan and system for monitoring the AI after deployment, to detect problems or degradation and take action (see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-72">Article 72</a>). This includes reporting serious incidents under <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-73">Article 73</a>.</p>
</li>
</ol>
<p>These three obligations reinforce each other. Think of it as an evidence triangle: documentation, logging, and monitoring. If one side is weak, the whole assurance collapses. For example, logs without documentation are just data with no context; documentation without monitoring becomes stale fiction; monitoring without logs means you can’t investigate incidents.</p>
<h2 id="heading-the-minimum-evidence-set-practical-mapping">The minimum evidence set (practical mapping)</h2>
<p>If you want a practical way to think about evidence, treat these as three versioned outputs that ship with every release:</p>
<p><img src="https://ormedian.com/blog/eu-ai-act/evidence-mapping.png" alt="Mapping EU AI Act obligations to concrete evidence artefacts" /></p>
<p><em>Mapping obligations to concrete evidence artefacts.</em></p>
<ul>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-11">Article 11</a> + <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-4">Annex IV</a> → a technical documentation bundle for the specific version you shipped.</p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-12">Article 12</a> → logging and record-keeping capability (structured event logs, retained as required).</p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-72">Article 72</a> → a post-market monitoring plan plus a running monitoring process/system.</p>
</li>
</ul>
<h2 id="heading-1-technical-documentation-annex-iv-is-the-spine">1) Technical documentation: Annex IV is the spine</h2>
<p>For high-risk systems, Annex IV of the Act sets out what technical documentation must include. In short, it’s a comprehensive “tech spec + compliance report” for your AI system. The Commission’s service desk provides a view of Annex IV and a link to the official text:</p>
<ul>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-4">Annex IV (Service Desk summary)</a> – a readable breakdown of the requirements.</p>
</li>
<li><p><a target="_blank" href="https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng">Official act text (EUR-Lex)</a> – the legal text, if you enjoy that.</p>
</li>
</ul>
<p>A practical (non-exhaustive) reading of Annex IV looks like this:</p>
<ul>
<li><p>Intended purpose + versioning – What is the system meant to do (and not do), who built it, and the exact version (including how it relates to previous versions).</p>
</li>
<li><p>System architecture + integration – How the AI model and other components work together (e.g. data pipeline, UI, APIs) and the context it operates in.</p>
</li>
<li><p>Data – The characteristics and provenance of training and test data: where it came from, how it was collected or annotated, any biases or limitations.</p>
</li>
<li><p>Development process – How you built the model (did you fine-tune a pretrained model? use AutoML? what steps in training?).</p>
</li>
<li><p>Model performance and limits – The AI’s capabilities and accuracy (overall and on specific relevant groups), and its known limitations or foreseeable unintended outcomes.</p>
</li>
<li><p>Validation and testing – How you validated the model: which metrics, test datasets, robustness checks, and the results (including test logs and reports).</p>
</li>
<li><p>Risk management – The known risks (e.g. failure modes, potential for bias, cybersecurity vulnerabilities) and the measures in place to mitigate them (which ties to the required risk management system from Article 9).</p>
</li>
<li><p>Changes and versions – If you update the AI, what changed and why, and evidence the new version still meets the documented intent, tests, and controls.</p>
</li>
<li><p>Post-market plan – Annex IV expects the post-market monitoring plan (from <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-72">Article 72</a>) to be part of your technical documentation bundle for each version.</p>
</li>
</ul>
<p><strong>Key point:</strong> Annex IV makes “versioned evidence” unavoidable. It explicitly ties documentation to the AI system’s version and lifecycle. This is why treating evidence as a continuous <em>release artefact</em> (not a one-time PDF) is the sane way forward. Every time you ship a new model or update, the documentation needs to be updated and re-issued.</p>
<h2 id="heading-2-record-keeping-logs-as-a-compliance-primitive-article-12">2) Record-keeping: logs as a compliance primitive (Article 12)</h2>
<p>High-risk AI systems must “technically allow for the automatic recording of events (logs) over the lifetime of the system” (see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-12">Article 12</a>). In other words, you need to build in logging from the start – it’s not optional. The logs should be detailed enough to:</p>
<ul>
<li><p>Trace decisions and outcomes – If something goes wrong, the log should help identify where and why (think of it as a flight recorder for the AI).</p>
</li>
<li><p>Facilitate post-market monitoring – Your monitoring system (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-72">Article 72</a>) will rely on logs to detect issues. No logs = no meaningful monitoring.</p>
</li>
<li><p>Capture relevant data – For some systems, the law spells out additional details to log (like timestamp of each use, data checked, who verified the results).</p>
</li>
</ul>
<p>Article 12 is short but potent. This isn’t just “keep some logs because it’s good practice.” It effectively means that if you can’t reconstruct what happened in your AI’s operation, you can’t defend your system in an audit or investigation. Engineering teams should treat logging as part of the model’s design: decide <em>what events</em> are critical (inputs? outputs? decisions? model version used?), define a <em>schema</em>, and ensure logs are stored securely and retained as long as required.</p>
<p>One practical tip: the logs required here feed directly into both incident response and monitoring. So design your logging with those use cases in mind (e.g. log the model version and configuration for each prediction, so later you can pinpoint which version was running when an incident occurred).</p>
<h2 id="heading-3-post-market-monitoring-a-plan-continuous-system-article-72">3) Post-market monitoring: a plan + continuous system (Article 72)</h2>
<p><img src="https://ormedian.com/blog/eu-ai-act/monitoring-loop.png" alt="Post-market monitoring loop" /></p>
<p><em>Signals → thresholds → actions → evidence update loop.</em></p>
<p>Article 72 requires providers of high-risk AI systems to implement a post-market monitoring system <em>and</em> have a post-market monitoring plan to guide it. Think of this as setting up an ongoing process to watch your AI in the wild and catch problems early.</p>
<p>A few details matter for engineering teams:</p>
<ul>
<li><p>The monitoring plan must be documented as part of your technical documentation (see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-72">Article 72</a> and <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-4">Annex IV</a>). That means by the time you go to market, you need a written plan for how you will monitor the AI, what data you’ll collect, and how you’ll evaluate performance and risks over time.</p>
</li>
<li><p>The European Commission is tasked with providing a template and required elements for this plan by 2 February 2026 (Article 72(3); see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-72">Article 72</a>). Expect a standardized format for what your monitoring plan should include (likely metrics to track, frequency of checks, and processes for handling incidents).</p>
</li>
</ul>
<p>What does post-market monitoring look like in practice? It means you can’t just “set and forget” your model after deployment. You need to actively collect data on how it’s performing – accuracy, drift, error rates, potentially bias metrics, uptime, etc. – and also capture any incidents or near-misses. If performance degrades or new risks emerge (say, the data input distribution shifts), you’re expected to notice and do something (retrain, update the model, or even pull the model from service in extreme cases).</p>
<p>One way to think of it: Model monitoring is like CI/CD for risk and performance. Just as you wouldn’t deploy software without monitoring uptime and errors, the AI Act pushes you not to deploy AI without monitoring its real-world behavior and impact.</p>
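<p>The “signals → thresholds → actions” loop can be made concrete with a tiny decision rule. The threshold values below are invented for illustration; in practice they would come from your written monitoring plan, not from the Act itself.</p>

```python
def monitoring_action(window_accuracy: float,
                      plan_threshold: float = 0.90,
                      incident_floor: float = 0.75) -> str:
    """Map a rolling accuracy figure to one of the plan's responses.
    Threshold values are illustrative, not prescribed by the AI Act."""
    if window_accuracy < incident_floor:
        return "escalate"      # candidate for serious-incident review
    if window_accuracy < plan_threshold:
        return "investigate"   # open a ticket, consider retraining
    return "ok"                # keep monitoring
```

<p>The useful property is that the rule itself becomes versionable evidence: the thresholds live in the monitoring plan, and every action the rule takes can be logged next to the signal that triggered it.</p>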
<h2 id="heading-bonus-serious-incident-reporting-article-73">Bonus: serious incident reporting (Article 73)</h2>
<p>Post-market monitoring feeds into another obligation: serious incident reporting. Under <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-73">Article 73</a>, providers of high-risk AI systems must be able to detect serious incidents and report them to authorities.</p>
<p>A “serious incident” basically means something went really wrong – e.g. the AI caused someone harm or a major near-miss. The law sets timelines for reporting, including accelerated timelines in some cases. The idea is to give regulators a heads-up on emerging risks.</p>
<p>For engineering teams, this is an extension of incident response. You’ll need to define what constitutes a serious incident for your system (some of this is defined in the law, but you should operationalize it) and have the mechanisms to: (a) detect that it occurred (likely via your logs and monitoring signals), and (b) have a process to investigate and report on it quickly. It’s another reason why logging and version traceability are so important – if you can’t connect an incident to a specific model version and situation, you’ll have a hard time reporting the “causal link” as the law requires.</p>
<h2 id="heading-what-this-means-in-practice-start-ups-smes-and-enterprises">What this means in practice (start-ups, SMEs, and enterprises)</h2>
<h3 id="heading-if-youre-a-new-company-start-up-sme">If you’re a new company (start-up / SME)</h3>
<p>Your advantage: you can design evidence into your delivery pipeline from day one. You’re not stuck with legacy systems or habits. And you’ll need that advantage, because compliance is a big ask for a small team.</p>
<p>The Act includes measures intended to support SMEs and start-ups, including regulatory sandboxes (Article 57) and support measures (Article 62); see the <a target="_blank" href="https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng">official text</a>. In short, regulators recognise this can be hard for smaller teams and include mechanisms to reduce friction.</p>
<h4 id="heading-founder-playbook">Founder playbook</h4>
<ul>
<li><p>Pick a compliance-friendly beachhead. If you’re a founder, consider targeting a domain where evidence is already valued (healthcare, finance, public sector). Compliance can become a competitive advantage if you bake it in early.</p>
</li>
<li><p>Treat evidence as part of the product. Shift your mindset from “we deliver a model or service” to “we deliver a model with an assurance packet.” When selling to enterprise or government, being able to hand over an Assurance Pack (see below) with all the documentation and logs can compress procurement and due diligence. It answers questions like: “What is it for and not for?”, “How was it evaluated?”, “What are the known risks and mitigations?”, and “How will you detect and handle issues?”</p>
</li>
<li><p>Leverage sandboxes and resources. If a regulatory sandbox is available in your country, use it to pressure-test your evidence and monitoring approach. Any outputs can go into your assurance package — but validate the sandbox terms with counsel and the competent authority.</p>
</li>
</ul>
<p>Even if your AI system is currently minimal risk, consider documenting it and monitoring it as if you had obligations. It’s easier to start with good habits than to retrofit them. Plus, enterprise customers are increasingly asking for evidence (e.g. model cards, test results, risk assessments) even for non-high-risk AI. Be ahead of the curve.</p>
<h3 id="heading-if-youre-an-existing-organization-enterprise">If you’re an existing organization (enterprise)</h3>
<p>You likely have some governance structure (responsible AI committees, model documentation, etc.). The typical failure mode is fragmentation and drift:</p>
<ul>
<li><p>The Confluence wiki says one thing about the model, the production code says another.</p>
</li>
<li><p>You have a model card that was made at launch, but the model has been updated 5 times since then.</p>
</li>
<li><p>Risk assessments were done in a workshop, but the mitigations were never implemented or tracked.</p>
</li>
<li><p>Different teams use different tools, and evidence is all over the place (or lost when people leave).</p>
</li>
</ul>
<p>The AI Act’s evidence requirements essentially force alignment and a single source of truth. For example, you can’t let the model drift away from its documentation – by law, that documentation must be kept in sync with the current system version (see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-4">Annex IV</a>). This is as much an organizational challenge as a technical one.</p>
<p><strong>Fastest win for an enterprise:</strong> Stop treating documentation and compliance as a separate, downstream process. Integrate it into your ML workflow. For instance, require that every model that goes to production has an “assurance package” generated (even if lightweight at first) that includes all key documentation and evidence artifacts, and store it versioned in a central repository. This way, if someone asks “what changed between the last version and this one?”, you not only have a git diff of the code, but a diff of the assurance package (data, metrics, documentation updates, risk logs, etc.).</p>
<p>Also, start mapping existing tools to these needs. Maybe your experiment tracking can output evaluation reports, your DataOps can provide data provenance, etc. You don’t necessarily need one giant new system; you can script pulling pieces together. The point is to bundle and version it.</p>
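<p>As a sketch of what “scripting pulling pieces together” might look like, the snippet below copies whatever evidence artifacts your existing tools already emit into a versioned pack folder and writes a small index for later diffing. The function name and folder layout here are illustrative, not a standard API:</p>

```python
import json
import shutil
from pathlib import Path

def assemble_pack(release_version, sources, out_root="assurance-packs"):
    """Copy evidence artifacts into a versioned pack folder.

    `sources` maps pack-relative paths (e.g. "evaluation/metrics_summary.json")
    to files produced by your existing tools (experiment tracking, DataOps, ...).
    """
    pack_dir = Path(out_root) / release_version
    for rel_path, src in sources.items():
        dest = pack_dir / rel_path
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy(src, dest)
    # A small index makes packs easy to compare between releases.
    index = {"release_version": release_version, "files": sorted(sources)}
    (pack_dir / "index.json").write_text(json.dumps(index, indent=2))
    return pack_dir
```

<p>Run it once per release (e.g. from CI) and store the resulting folder next to the model artifact.</p>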
<h3 id="heading-legacy-systems">Legacy systems</h3>
<p>What about AI systems you already have in the field? Transitional rules and what counts as a material change matter — treat it as a governance decision and check the official text and the Commission timeline (<a target="_blank" href="https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng">ai-act-eurlex</a>, <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act">timeline</a>).</p>
<p>A pragmatic approach: start generating evidence packs for new releases now, and then backfill legacy systems based on priority. Identify which existing AI systems would be considered high-risk (an internal audit can map your systems to Annex III categories). For the most critical ones, it’s worth doing a post hoc documentation and risk assessment exercise to have something on file. For others, plan an update and treat that update as the point where you bring it into compliance (i.e. use the update process to generate the necessary documentation and monitoring).</p>
<p>The key is: don’t assume “old = exempt.” Check the rules and document your rationale for how you handle legacy systems and updates.</p>
<h2 id="heading-the-assurance-pack-idea-operationalizing-annex-iv-articles-12-amp-72">The Assurance Pack idea: operationalizing Annex IV + Articles 12 &amp; 72</h2>
<p>In practice, I keep coming back to one principle:</p>
<blockquote>
<p><strong>Evidence must be coupled to the version you deploy.</strong></p>
</blockquote>
<p>This principle flows directly from the Act’s evidence obligations: technical documentation (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-11">Article 11</a> and <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-4">Annex IV</a>), record-keeping (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-12">Article 12</a>), and post-market monitoring (<a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-72">Article 72</a>). You can’t satisfy those by doing a big Word document at launch and forgetting about it. It’s a continuous process.</p>
<p>So, the Assurance Pack is a pragmatic response: make the evidence package a versioned artefact just like your code or model.</p>
<p>Imagine for each AI release, alongside your model artifact and API, you produce a folder (and a downloadable bundle) that contains <em>all the key evidence</em> for that version. That’s an Assurance Pack. It would include things like:</p>
<ul>
<li><p>The intended use and limitations of the AI (what you’d put in Section 1 of Annex IV).</p>
</li>
<li><p>The architecture and integration description.</p>
</li>
<li><p>Data sheet: info on training data, when the data was collected, any updates.</p>
</li>
<li><p>Evaluation results: metrics, validation reports, bias analyses for this version.</p>
</li>
<li><p>Change log: how this version differs from the last (e.g. “added 10k more training samples from X, fixed bug in preprocessing Y, improved accuracy by 2% on minority class”).</p>
</li>
<li><p>Risk assessment: an updated risk log or checklist (e.g. “still residual risk Z remains, but acceptable with mitigation Q”).</p>
</li>
<li><p>Logging specification: what events the system logs, and where logs are kept.</p>
</li>
<li><p>Post-market monitoring plan: what we’re tracking in production for this version, any new signals or thresholds added.</p>
</li>
<li><p>Approvals: who reviewed and approved this release for compliance (e.g. the responsible AI lead signs off).</p>
</li>
</ul>
<p>All of the above corresponds to Annex IV or related obligations. By packaging it, you ensure it travels with the software.</p>
<h3 id="heading-what-an-assurance-pack-looks-like-example">What an Assurance Pack looks like (example)</h3>
<p>Here’s a conceptual structure of an Assurance Pack:</p>
<p><img src="https://ormedian.com/blog/eu-ai-act/assurance-pack-structure.png" alt="Assurance Pack structure" /></p>
<p><em>Assurance Pack structure + manifest.</em></p>
<pre><code class="lang-bash">assurance-pack/
├── manifest.yaml
├── system/
│   ├── intended_use.md
│   ├── system_overview.md
│   ├── architecture.md
│   ├── integration.md
│   ├── limitations.md
│   └── change_log.md
├── data/
│   ├── provenance.md
│   ├── preprocessing.md
│   └── labeling.md
├── evaluation/
│   ├── eval_plan.md
│   ├── metrics_summary.json
│   ├── slice_results.csv
│   └── robustness_tests.md
├── test_logs/
│   └── ... (raw logs or links to <span class="hljs-built_in">test</span> run outputs)
├── risk/
│   ├── risk_assessment.md
│   ├── risk_controls.md
│   └── human_oversight.md
├── logging/
│   ├── logging_spec.md
│   ├── event_schema.json
│   └── retention_policy.md
├── monitoring/
│   ├── post_market_monitoring_plan.md
│   ├── signals_and_thresholds.yaml
│   ├── incident_playbook.md
│   └── escalation_contacts.md
├── governance/
│   ├── roles_and_accountability.md
│   └── approvals.md
└── attestations/
    ├── hashes.txt
    └── signature.sig
</code></pre>
<p>And a minimal manifest.yaml inside might contain metadata and pointers:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">pack_version:</span> <span class="hljs-string">"0.1"</span>

<span class="hljs-attr">system:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">"ExampleAI"</span>
  <span class="hljs-attr">release_version:</span> <span class="hljs-string">"1.4.0"</span>
  <span class="hljs-attr">previous_release:</span> <span class="hljs-string">"1.3.2"</span>

<span class="hljs-attr">provider:</span> <span class="hljs-string">"YourCompany"</span>
<span class="hljs-attr">deployment_context:</span> [<span class="hljs-string">"EU"</span>]

<span class="hljs-attr">classification:</span>
  <span class="hljs-attr">high_risk_candidate:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">rationale:</span> <span class="hljs-string">"Annex III use-case (employment) – likely high-risk"</span>

<span class="hljs-attr">evidence:</span>
  <span class="hljs-attr">technical_documentation:</span>
    <span class="hljs-attr">annex_iv_mapping:</span> <span class="hljs-string">"system/system_overview.md"</span>
  <span class="hljs-attr">record_keeping:</span>
    <span class="hljs-attr">article_12_logging_spec:</span> <span class="hljs-string">"logging/logging_spec.md"</span>
  <span class="hljs-attr">post_market_monitoring:</span>
    <span class="hljs-attr">article_72_plan:</span> <span class="hljs-string">"monitoring/post_market_monitoring_plan.md"</span>

<span class="hljs-attr">quality_gates:</span>
  <span class="hljs-attr">min_accuracy:</span> <span class="hljs-number">0.92</span>
  <span class="hljs-attr">regression_tolerance:</span> <span class="hljs-number">0.01</span>

<span class="hljs-attr">approvals:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">role:</span> <span class="hljs-string">"Responsible AI Lead"</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">"Dr. Jane Doe"</span>
    <span class="hljs-attr">date:</span> <span class="hljs-string">"2026-01-24"</span>
</code></pre>
<p>It’s intentionally not fancy. The <em>wow</em> factor isn’t the YAML or the folder tree – it’s the process change: making this evidence packet a required output of every release and treating it with the same importance as the model artifact or source code. That enables automation (e.g. you can diff two manifests to see what changed, or automatically check if certain fields are present or within policy thresholds).</p>
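<p>For instance, diffing two manifests takes only a few lines. This sketch (assuming the manifests have already been parsed into nested dicts, e.g. with <code>yaml.safe_load</code>) reports every dotted key whose value changed between releases:</p>

```python
def diff_manifests(old, new, prefix=""):
    """Return {dotted.key: (old_value, new_value)} for changed entries."""
    changes = {}
    for key in sorted(set(old) | set(new)):
        path = prefix + key
        a, b = old.get(key), new.get(key)
        if isinstance(a, dict) and isinstance(b, dict):
            # Recurse into nested sections like `system:` or `quality_gates:`.
            changes.update(diff_manifests(a, b, prefix=path + "."))
        elif a != b:
            changes[path] = (a, b)
    return changes
```

<p>Comparing two releases’ manifests then surfaces changed fields (say, <code>system.release_version</code> or a relaxed quality gate) in one call.</p>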
<h3 id="heading-how-it-fits-into-delivery-conceptual">How it fits into delivery (conceptual)</h3>
<p><img src="https://ormedian.com/blog/eu-ai-act/delivery-pipeline.png" alt="Evidence as a release artefact in the delivery pipeline" /></p>
<p><em>How evidence becomes a release artefact in the delivery pipeline.</em></p>
<p>How do you actually produce these without killing your team’s velocity? The trick is to automate as much as possible and make it part of CI/CD:</p>
<ol>
<li><p>Develop &amp; train as usual (not much change here).</p>
</li>
<li><p>Evaluate as usual – but ensure your evaluation spits out artifacts (metrics, graphs, maybe a PDF report or JSON summary).</p>
</li>
<li><p>Assemble the pack: Have a script or CI job that gathers all the pieces (the model card from your training pipeline, the evaluation metrics, the data schema, etc.) and puts them in the right folder structure. Some pieces might be written by humans (e.g. <code>limitations.md</code> might be manually maintained), but many can be generated or templated.</p>
</li>
<li><p>Validate the pack: Just like you wouldn’t deploy if tests fail, set a rule that you don’t deploy if the assurance pack is incomplete. For example, check that <code>metrics_summary.json</code> shows performance above your required threshold, and that <code>risk_assessment.md</code> has been updated for this version (maybe require a specific commit message or checklist).</p>
</li>
<li><p>Publish/store the pack: Store it in artifact storage or attach it to the release in your repository. Treat it as a deliverable. If an auditor or client wants it later, it should be readily accessible.</p>
</li>
<li><p>Deploy the AI system (if all gates pass, including evidence checks).</p>
</li>
</ol>
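<p>Step 4, the validation gate, is the easiest to automate. A hypothetical CI check might verify required files exist and compare metrics against the manifest’s quality gates; the file list and threshold below are illustrative, matching the pack structure sketched earlier:</p>

```python
import json
from pathlib import Path

REQUIRED = [
    "manifest.yaml",
    "system/intended_use.md",
    "evaluation/metrics_summary.json",
    "risk/risk_assessment.md",
    "monitoring/post_market_monitoring_plan.md",
]

def validate_pack(pack_dir, min_accuracy=0.92):
    """Return a list of blocking problems; an empty list means deploy may proceed."""
    pack = Path(pack_dir)
    problems = ["missing: " + f for f in REQUIRED if not (pack / f).exists()]
    metrics = pack / "evaluation" / "metrics_summary.json"
    if metrics.exists():
        accuracy = json.loads(metrics.read_text()).get("accuracy", 0.0)
        if min_accuracy > accuracy:  # quality gate, e.g. taken from manifest.yaml
            problems.append(f"accuracy {accuracy} below gate {min_accuracy}")
    return problems
```

<p>Wire it into CI so a non-empty result fails the build, exactly like a failing test suite.</p>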
<p>If you’ve heard of software bills of materials (SBOM) in cybersecurity, this is a similar concept – an evidence BOM for AI. It ensures that whenever your AI ships, the evidence of its trustworthiness ships with it.</p>
<h2 id="heading-what-auditors-and-procurement-actually-ask-and-how-packs-answer">What auditors and procurement actually ask (and how packs answer)</h2>
<p>When facing an audit or customer security review, you’ll often get a spreadsheet or questionnaire. Assurance Packs, if done well, let you answer many questions by just handing them the pack (or extracting the relevant parts quickly).</p>
<p>Typical questions and where the pack addresses them:</p>
<ol>
<li><p>“What is this AI system for, and what are its limitations?” – Check <em>intended_use.md</em> and <em>limitations.md</em> in the pack. This should clearly state the purpose, scope, and contexts where the AI should (or should not) be used. It’s basically your Annex IV Section 1(a) &amp; (c) info.</p>
</li>
<li><p>“Show me how you evaluated it for that use.” – The pack’s <em>evaluation/</em> folder should have an evaluation plan (<em>eval_plan.md</em>) and results (metrics, slices, robustness tests). This demonstrates that you tested the system on relevant data and know its performance (see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-4">Annex IV</a>).</p>
</li>
<li><p>“What are the risks and how do you mitigate them?” – The <em>risk/</em> section of the pack holds your risk assessment and any mitigations (like human oversight measures, fail-safes, bias mitigations). This aligns with the Act’s risk management expectations (Article 9 and Annex IV sections on risk). If you identified, say, a risk of bias against a subgroup, you’d note it and perhaps point to a control (maybe a periodic bias audit in monitoring).</p>
</li>
<li><p>“How do you detect and handle issues in production?” – The <em>monitoring/</em> folder includes the post-market monitoring plan (signals, thresholds, actions) and an incident response playbook. This shows that you’re not just throwing the model over the wall – you have a process to watch it and respond.</p>
</li>
<li><p>“Can you trace decisions if something goes wrong?” – The <em>logging/</em> folder holds the logging specification and schema, which shows you’ve built traceability (and you can even provide sample log data if appropriate, or show an incident report tracing through logs).</p>
</li>
<li><p>“Who takes responsibility?” – The <em>governance/roles_and_accountability.md</em> and <em>approvals.md</em> files should list the accountable roles (e.g. who the provider is, who the product owner is, etc.) and who signed off on the release. This aligns with the Act’s emphasis on defined responsibilities and quality management (Article 17 even requires a Quality Management System with assigned responsibilities).</p>
</li>
</ol>
<p>By structuring evidence this way, you turn a potential 2-week back-and-forth Q&amp;A into a quick lookup. It also impresses auditors: it shows maturity if you can <em>immediately</em> pull up the exact document or data they ask for, versioned for the release in question. No scrambling through folders full of files named “final_final_really.pdf” – you have it organized.</p>
<h2 id="heading-common-failure-modes-and-how-to-design-around-them">Common failure modes (and how to design around them)</h2>
<p>Through working on these issues, we’ve seen patterns of how evidence efforts can falter. Here are some common pitfalls and how a release-focused approach addresses them:</p>
<p><img src="https://ormedian.com/blog/eu-ai-act/version-diff.png" alt="Version-to-version evidence diffs" /></p>
<p><em>Version-to-version evidence diffs make audits/procurement faster.</em></p>
<ul>
<li><p><strong>Stale documentation:</strong> The classic “doc written once and never updated.” Six months later, it’s out of sync with the actual system.<br />  <em>Solution:</em> Make documentation part of the release checklist. If the pack for version 1.4.0 isn’t present or is missing sections, the release doesn’t get tagged. By tying docs to each version, you force updates as part of the dev process, not as an afterthought.</p>
</li>
<li><p><strong>Metrics without context:</strong> Teams often throw a few performance numbers into a report. But without context (what was the test set? what slices or edge cases were checked? what’s the target performance?), numbers mean little.<br />  <em>Solution:</em> Require an evaluation plan and sliced results in the pack. This means before training, you define how you’ll validate (which forces thinking about relevant scenarios). And after training, you include not just “overall accuracy 94%” but, for example, “accuracy per subgroup, worst-case = 88% on subgroup X” plus any stress tests. That gives a fuller picture.</p>
</li>
<li><p><strong>Monitoring that doesn’t match risks:</strong> It’s common to see a monitoring dashboard that tracks CPU usage and maybe overall prediction count – but nothing about whether the model is drifting or making more errors.<br />  <em>Solution:</em> In the monitoring plan, explicitly tie metrics to risks. If your risk assessment says “may have lower accuracy on older patients,” then your monitoring should include a check on performance by age (if you can get that data) or at least a proxy. If bias is a risk, include some bias indicators. Essentially, close the loop: each major risk should have a corresponding monitoring signal or periodic check.</p>
</li>
<li><p><strong>No traceability across versions:</strong> When something goes wrong, the team isn’t sure which model version was running or what data was used. This is a nightmare for accountability (and regulators won’t accept “we’re not sure which model was live”).<br />  <em>Solution:</em> Always log the model version (and ideally a unique identifier for the model build) on every prediction or decision. And in your pack’s manifest, maybe include a hash of the model file and code used. That way, you can always match a log entry to the exact version and evidence pack. Also, store packs in an immutable store (even just a versioned S3 bucket) so you can pull up the exact docs that correspond to a version.</p>
</li>
<li><p><strong>No incident playbook:</strong> When an issue happens, it’s chaos – people aren’t sure who leads the investigation, or how to decide if something is serious enough to report to regulators.<br />  <em>Solution:</em> Include an incident response plan in the pack (and of course, internally train on it). It should say: if X type of incident occurs, here’s how to assess it, here’s who convenes to decide next steps, here’s how to file a report under Article 73 if needed. This way, every release reminds the team “we have a process if something goes wrong.” It’s much easier to follow a plan than to ad-lib under pressure.</p>
</li>
</ul>
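<p>The traceability fix can be as small as a logging helper that stamps every decision with the model version and a content hash of the model file. The field names below are illustrative, and a real deployment would write to a log store rather than an in-memory list:</p>

```python
import hashlib
import json
import time

def file_sha256(path):
    """Chunked SHA-256 of a model artifact, so large files aren't loaded whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def log_decision(sink, model_version, model_sha256, input_ref, output):
    """Append one traceable record (JSON lines style).

    Store an input *reference*/ID, not raw personal data, to stay within
    privacy constraints while keeping the decision traceable.
    """
    sink.append(json.dumps({
        "ts": time.time(),
        "model_version": model_version,
        "model_sha256": model_sha256,
        "input_ref": input_ref,
        "output": output,
    }))
```

<p>With the same hash recorded in the pack’s manifest, any log entry can be matched to the exact model build and its evidence pack.</p>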
<h2 id="heading-a-starter-outline-for-an-article-72-post-market-monitoring-plan">A starter outline for an Article 72 post-market monitoring plan</h2>
<p>The Commission is expected to adopt a template for the post-market monitoring plan by 2 February 2026 (Article 72(3); see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-72">Article 72</a>). But you don’t have to (and shouldn’t) wait for that to start planning. Here’s a practical outline you can start filling in now, which should align well with Article 72’s intent:</p>
<p><strong>1) Scope and context:</strong> Define which system this plan covers (name, version, etc.), its intended use, and deployment context. Note who the provider is and who the deployers are (if different). Essentially, set the scene: “This plan monitors <em>System X version Y</em> in <em>Z environment</em> for <em>intended purpose P</em>.”</p>
<p><strong>2) Risks and assumptions:</strong> Summarize the key risks from your risk assessment. “We are particularly watching for performance degradation on dataset drift, or an uptick in false positives causing potential harm, etc.” List hypotheses like “if metric M goes above threshold T, it could indicate risk R is materializing.”</p>
<p><strong>3) Signals to monitor:</strong> For each risk or important aspect, what metrics or signals will you track? For example: input data drift (monitor distribution of inputs over time), output quality (monitor error rates against a validation dataset or human feedback), bias (monitor outcomes by demographic if available), system uptime, etc. Also external signals like user complaints or support tickets could be included.</p>
<p><strong>4) Thresholds and triggers:</strong> Decide what levels of those signals warrant action. Maybe you set a warning threshold and an incident threshold. E.g. “If weekly accuracy drops by more than 5 points, data science team investigates within 1 week. If drops by 10 points or more, halt model and failover to manual process.” This is essentially your SLA for model performance and risk.</p>
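<p>The warning/incident split in point 4 can be captured in a tiny classifier that monitoring jobs call per signal; in practice the thresholds would come from a config such as the pack’s <code>signals_and_thresholds.yaml</code> rather than being hard-coded:</p>

```python
def check_signal(value, warn_at, incident_at, higher_is_worse=True):
    """Classify one monitoring reading as 'ok', 'warning', or 'incident'."""
    if higher_is_worse:
        if value >= incident_at:
            return "incident"
        if value >= warn_at:
            return "warning"
    else:
        # For signals where low values are bad (e.g. accuracy itself).
        if incident_at >= value:
            return "incident"
        if warn_at >= value:
            return "warning"
    return "ok"
```

<p>A 6-point weekly accuracy drop against warn-at-5 / halt-at-10 thresholds classifies as a warning, triggering the one-week investigation rather than the failover.</p>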
<p><strong>5) Logging and traceability link:</strong> State how you will use logs in monitoring. “All predictions are logged with timestamp and key attributes; monitoring jobs run daily to aggregate these logs and check for anomalies. In case of incident, logs will be analyzed to trace root cause.” (This connects to the Article 12 obligation – your plan should reference that you have the data to support it.)</p>
<p><strong>6) Review cadence:</strong> How often will you formally review the monitoring data and overall system performance? Perhaps you have a monthly governance review where you look at trends, and a yearly full audit. Mention that. The Act will expect you to continuously update the technical documentation with new findings, so tie that in: “Results of monitoring will feed into periodic updates of technical documentation and risk assessment.”</p>
<p><strong>7) Continuous improvement:</strong> Explain how you will update the system or process when issues are found. For instance, “If drift is detected, we will retrain the model on latest data within X weeks,” or “If new risks are identified, we will update the risk log and implement mitigation.” This shows you have a learning loop, not just monitoring in name only.</p>
<p><strong>8) Reporting workflow:</strong> Outline how internal escalation works. “On detecting a serious incident, the on-call ML engineer notifies the AI Risk Officer; a root cause analysis is done; if criteria for <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-73">Article 73</a> reporting are met, we will file a report to authorities within the required timeline.” Basically, tie your plan into the legal reporting duty so it’s clear you won’t drop the ball.</p>
<p>Documenting this now not only prepares you for compliance, it actually helps your team. It’s much easier to sleep at night knowing you have a sensor on the system and a plan for what to do if it blips.</p>
<h2 id="heading-isnt-this-something-platforms-can-just-add">“Isn’t this something platforms can just add?”</h2>
<p>You might wonder: won’t the AWS/GCP/Azure/ModelOps platforms of the world just solve this with a new feature? They will certainly help with parts of it – for example, logging and monitoring tools are out there, and they can add compliance checklists. But a key differentiator here is portability and version-coupling of evidence.</p>
<p>Many platform solutions focus on dashboards or documents in situ. The Assurance Pack concept is about a portable bundle that you can hand over to an auditor or customer, or move to another platform, and it still makes sense. It’s decoupled from any specific tooling UI. It’s also verifiable in the sense that you can sign it, hash it, and show it wasn’t tampered with.</p>
<p>Think of the security world: cloud providers offer great security centers, but companies still produce their own artifacts for audits (architecture diagrams, access reviews, etc.) that are portable. Similarly, as an AI provider you will want a portable evidence artefact that <em>you</em> own and control, which can be shared externally as needed (likely under NDA or via secure channels).</p>
<p>Platform vendors can assist by generating pieces of the evidence (like auto-generating parts of the technical documentation, or providing one-click model cards), and they might even allow exporting a “compliance bundle”. But until that’s common, rolling your own lightweight process as described can give you a head start (and more control).</p>
<h2 id="heading-what-you-should-do-this-quarter">What you should do this quarter</h2>
<p>A call to action for engineering and product teams:</p>
<ol>
<li><p>Determine if you’re in scope (and at what level). Map your AI use-cases to the Act’s categories. Are you possibly high-risk (see <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-6">Article 6</a>)? If yes, is it an <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-1">Annex I</a> regulated-product scenario or an <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-3">Annex III</a> use-case? Use the definitions in <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-3">Article 3</a> to clarify your role (provider/deployer/etc.). If you’re not high-risk, great – but remember that enterprise customers might still ask for “AI governance” evidence, so it’s worth adopting a light version of these practices anyway.</p>
</li>
<li><p>Start treating evidence as a deliverable. Kick off an initiative to define what an “Assurance Pack” would be for your main AI system. Pick a recent release and try to compile the key items. This will highlight gaps (e.g. “oh, we never actually wrote down our intended use and limitations clearly”). It’s fine if it’s messy at first. Create a template and iterate.</p>
</li>
<li><p>Build the monitoring plan now. Don’t wait until 2026 when the official template drops. Begin drafting a post-market monitoring plan for your system using the outline above. Even if it’s rough, it will surface questions (What should we monitor? Can we get that data? Who would be on the hook if X happens?) that you’re better off answering sooner rather than later. By the time the Commission’s template arrives, you’ll have a version to align with it, rather than starting from scratch under time pressure.</p>
</li>
</ol>
<p>By focusing on these steps, you’ll not only de-risk compliance, but you’ll likely improve your AI practice overall. It’s the whole “sunlight is the best disinfectant” idea – making yourself document and monitor forces you to build better, more reliable AI.</p>
<h2 id="heading-a-pragmatic-implementation-approach-mapping-to-the-evidence-triangle">A pragmatic implementation approach (mapping to the evidence triangle)</h2>
<p>You don’t need a mega-platform to start. Whether you build this in-house or buy tooling later, aim for three capabilities that map cleanly to the evidence triangle:</p>
<p><img src="https://ormedian.com/blog/eu-ai-act/assurance-wrapper.png" alt="Wrapping an AI system with evidence capture and release packaging" /></p>
<p><em>Wrap an AI system with evidence capture, review, and release packaging.</em></p>
<ul>
<li><p><strong>Evidence Packs (technical documentation):</strong> automate the assembly of a versioned evidence bundle aligned to <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-4">Annex IV</a>. Start with templates + a CI job that gathers evaluation artefacts, data provenance notes, and a human-maintained “limitations / intended use” section. The key is that the pack stays <strong>tied to the deployed version</strong> (and is easy to diff between releases).</p>
</li>
<li><p><strong>Monitoring by default (post-market):</strong> make the <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-72">Article 72</a> monitoring plan executable. Define your core signals (performance, drift, safety/bias proxies, incident triggers), wire them into your inference pipeline, and make the monitoring outputs part of the pack for that release (so you can show not only <em>what you planned</em> but <em>what you observed</em>).</p>
</li>
<li><p><strong>Provenance &amp; traceability (logging):</strong> treat <a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-12">Article 12</a> logging as a product primitive. Ensure every decision/prediction is traceable to <em>model version + config + relevant input/output metadata</em> (within privacy constraints). If you want stronger integrity guarantees, hash/sign the pack artefacts so you can later prove “this is the exact evidence for the exact model that ran.”</p>
</li>
</ul>
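<p>The “hash/sign the pack artefacts” idea maps to the <code>attestations/hashes.txt</code> file in the pack tree above. Here is a minimal hashing-only sketch; an actual signature step would use your existing signing infrastructure (e.g. GPG or sigstore):</p>

```python
import hashlib
from pathlib import Path

def write_hashes(pack_dir):
    """Write attestations/hashes.txt: one SHA-256 line per file in the pack."""
    pack = Path(pack_dir)
    lines = []
    for path in sorted(pack.rglob("*")):
        if path.is_file() and path.name != "hashes.txt":
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            lines.append(digest + "  " + path.relative_to(pack).as_posix())
    out = pack / "attestations" / "hashes.txt"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text("\n".join(lines) + "\n")
    return out
```

<p>The line format mirrors <code>sha256sum</code> output, so a recipient can verify the bundle with standard tooling later.</p>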
<p>If you want a starter template for an Assurance Pack (folder tree + manifest), feel free to message me on <a target="_blank" href="https://www.linkedin.com/in/samneering/">LinkedIn</a>.</p>
<h2 id="heading-primary-sources">Primary sources</h2>
<ul>
<li><p><a target="_blank" href="https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng">Regulation (EU) 2024/1689 (EUR-Lex, official text)</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act">Commission Service Desk timeline</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-3">Article 3 — Definitions</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-4">Article 4 — AI literacy</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-5">Article 5 — Prohibited practices</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-6">Article 6 — High-risk classification</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-11">Article 11 — Technical documentation</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-12">Article 12 — Record-keeping</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-50">Article 50 — Transparency obligations</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-72">Article 72 — Post-market monitoring</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-73">Article 73 — Serious incident reporting</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-1">Annex I — Regulated products</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-3">Annex III — High-risk use-cases</a></p>
</li>
<li><p><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-4">Annex IV — Technical documentation content</a></p>
</li>
</ul>
<h2 id="heading-further-reading">Further reading</h2>
<ul>
<li><a target="_blank" href="https://ai-act-service-desk.ec.europa.eu/en/faq">Commission Service Desk FAQ</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Equality, But at What Cost?]]></title><description><![CDATA[The issue and ethical conundrum with equality of outcome is that those who advocate for this ideal want everyone to end up on the same level, hence the advocacy for equal redistribution of results, regardless of merit, effort, or individual differenc...]]></description><link>https://samueladebayo.com/equality-but-at-what-cost</link><guid isPermaLink="true">https://samueladebayo.com/equality-but-at-what-cost</guid><category><![CDATA[Equality]]></category><category><![CDATA[equity]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Sun, 13 Apr 2025 16:46:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1744562475496/fe45f16c-780f-4287-bca8-fecc9afbc10d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The issue and ethical conundrum with equality of outcome is that those who advocate for this ideal want everyone to end up on the same level, hence the advocacy for equal redistribution of results, regardless of merit, effort, or individual differences. They do this seeking systemic mechanisms that they think will guarantee uniformity of result rather than equality of opportunity.</p>
<p>Perhaps they mean well, envisioning a world free of suffering and class divides, where historical injustices are remedied by enforced parity. Yet anyone in full possession of their faculties should recognise that this ideal is, without a doubt, myopic and will only soil the cause of genuine fairness by conflating uniformity with equity.</p>
<p>… geez! A world where everyone ends up the same would be dreadfully monotonous and boring, free from the very chaos that makes our world unique, wouldn't you agree? The vibrancy of human experience - our triumphs, our struggles, and our creative flourishes - all emerges from the interplay of difference, free will, freedom, and responsibility, not from enforced sameness.</p>
]]></content:encoded></item><item><title><![CDATA[Can AI see Alzheimer's before Symptoms appear?]]></title><description><![CDATA[One of the interesting research projects I collaborated on with colleagues in China last year was using vision to predict the tendency of developing Alzheimer’s disease. We used images from diverse clinical datasets, which gave us a base and provided...]]></description><link>https://samueladebayo.com/can-ai-see-alzheimers-before-symptoms-appear</link><guid isPermaLink="true">https://samueladebayo.com/can-ai-see-alzheimers-before-symptoms-appear</guid><category><![CDATA[2DCNN]]></category><category><![CDATA[3DCNN]]></category><category><![CDATA[Computer Vision]]></category><category><![CDATA[image processing]]></category><category><![CDATA[ConvolutionalNeuralNetworks]]></category><category><![CDATA[CNN]]></category><category><![CDATA[MedicalAI]]></category><category><![CDATA[#NeuralNetworks]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[AIforgood]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Sun, 09 Feb 2025 20:18:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739131906552/da1c0fed-0ad6-49cb-8fcd-d5c35c3d32cf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the interesting research projects I collaborated on with colleagues in China last year was using vision to predict the tendency of developing Alzheimer’s disease. We used images from diverse clinical datasets, which gave us a base and provided a rich variety of cases. However, the images when marred with uncertainties, often arising from imaging artefacts, variations in acquisition protocols, and intrinsic noise, the prediction became less reliable, hence complicating the analysis.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739132040489/4877f9e3-74ff-4a81-acbe-4df0347e298c.gif" alt class="image--center mx-auto" /></p>
<p>To address this, we proposed a hybrid approach that leverages the strengths of both 2D and 3D convolutional neural networks. We engineered the 2D CNN component to capture fine-grained texture details, while the 3D CNN extracts essential volumetric context. A key innovation in our methodology was the creation of virtual augmented slices for the additional channel in 3D, encoding a series of uncertainties directly within our model. This approach enables us to better represent the ambiguities inherent in the data and to catch uncertainties that may not appear during training but surface at inference. It led us to better results compared with existing methods.</p>
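<p>As a purely illustrative sketch - not the paper’s implementation, and the array names and shapes here are my assumptions - the general idea of encoding a per-voxel uncertainty map as an extra input channel alongside the image volume for a 3D CNN might look like:</p>

```python
import numpy as np

# Hypothetical resampled scan volume: depth x height x width.
volume = np.random.rand(32, 64, 64).astype(np.float32)

# Hypothetical per-voxel uncertainty map with the same spatial shape.
uncertainty = np.random.rand(32, 64, 64).astype(np.float32)

# Stack as channels -> (channels, depth, height, width), the usual 3D CNN layout.
x = np.stack([volume, uncertainty], axis=0)
print(x.shape)  # (2, 32, 64, 64)
```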
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739131574186/5206102b-d9c1-4b89-ae7c-71d4c870320f.png" alt class="image--center mx-auto" /></p>
<p>You can read the preprint version of our paper here: <a target="_blank" href="https://arxiv.org/pdf/2410.02714">https://arxiv.org/pdf/2410.02714</a><br />Code available on request.</p>
<p>Our current open question is: how can we best leverage multi-modal data to enhance our predictive models further? For instance, integrating additional imaging modalities such as PET scans or incorporating non-imaging biomarkers, like genetic and clinical data, could offer a more holistic view of disease progression. This fusion of diverse data sources might refine early detection strategies and pave the way for more personalised interventions.</p>
]]></content:encoded></item><item><title><![CDATA[Deep Learning for Computer Vision from Scratch]]></title><description><![CDATA[I know the hype is all around LLMs right now, but deep learning for computer vision continues to drive advancements in AI too – from your smartphone to applications at airports. Perhaps you would like to learn how to build yours.
This past summer, I ...]]></description><link>https://samueladebayo.com/deep-learning-for-computer-vision-from-scratch</link><guid isPermaLink="true">https://samueladebayo.com/deep-learning-for-computer-vision-from-scratch</guid><category><![CDATA[Deep Learning]]></category><category><![CDATA[Computer Vision]]></category><category><![CDATA[visual object tracking]]></category><category><![CDATA[YoloV8]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Sun, 02 Feb 2025 14:13:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738503867172/865ffbd0-cd0d-4e4a-aab1-828cc587b0e7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I know the hype is all around LLMs right now, but deep learning for computer vision continues to drive advancements in AI too – from your smartphone to applications at airports. Perhaps you would like to learn how to build yours.</p>
<p>This past summer, I had the privilege of teaching a 3-day course on Deep Learning for Computer Vision.</p>
<p>The course materials are now open source and available on GitHub: <a target="_blank" href="http://github.com/exponentialR/DL4CV">DL4CV</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738505493238/9b88a193-d1f2-4998-a297-36b883cc78d4.gif" alt class="image--center mx-auto" /></p>
<p>In the course, we began with a recap of Python and an introduction to PyTorch, then explored image computation techniques. We progressed from coding a simple neural network from the ground up to building basic architectures and advancing to deep neural networks such as VGG, ResNet, and YOLO. We also performed inference for emotion recognition, traffic tracking, and simple object tracking.</p>
<p>The repository contains:</p>
<ul>
<li><p>Lecture slides</p>
</li>
<li><p>Practical code samples (in PyTorch)</p>
</li>
<li><p>Datasets for hands-on projects</p>
</li>
<li><p>Step-by-step notebook tutorials</p>
</li>
</ul>
<p>Feel free to explore, fork, and contribute. Your feedback is always welcome. #DeepLearning #ComputerVision #OpenSource</p>
]]></content:encoded></item><item><title><![CDATA[Introducing QUB-PHEO Dataset]]></title><description><![CDATA[From June 2023 to January 2024, our team at the Centre for Intelligent Autonomous Manufacturing Systems, Queen’s University Belfast, conducted an extensive data collection project aimed at advancing human intention inference. Our goal was to validate...]]></description><link>https://samueladebayo.com/introducing-qub-pheo-dataset</link><guid isPermaLink="true">https://samueladebayo.com/introducing-qub-pheo-dataset</guid><category><![CDATA[qub-pheo]]></category><category><![CDATA[human-intention]]></category><category><![CDATA[dataset]]></category><category><![CDATA[Computer Vision]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Sat, 25 Jan 2025 20:20:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1737836214250/3460e3fa-f724-476a-8b73-71755fdc6af4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>From June 2023 to January 2024, our team at the Centre for Intelligent Autonomous Manufacturing Systems, Queen’s University Belfast, conducted an extensive data collection project aimed at advancing human intention inference. Our goal was to validate the hypothesis that incorporating multiview, multimodal visual signals, especially fine-grained ones, can enhance the prediction of human intentions with precision down to the nearest second.</p>
<p>QUB-PHEO (Queen’s University Belfast, Perception of Human Engagement in Assembly Operations) is designed to support advanced research in human intention inference and engagement analysis. Our dataset provides a foundation for developing and testing innovative models and applications in this domain.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737836170316/93c031d6-30d2-47f7-a44d-fe01780e811a.gif" alt class="image--center mx-auto" /></p>
<p><strong>Project Highlights:</strong><br /><code>Data Volume</code>: Over 4.5 million 4K frames and 40+ hours of video data<br /><code>Tasks</code>: 9 tasks and 36 subtasks<br /><code>Participants</code>: 70 individuals contributing diverse perspectives</p>
<p>We have thoroughly documented our methodology, approach, and core principles in our recently published paper in IEEE Access. You can access the <a target="_blank" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;arnumber=10731700">paper here</a></p>
<p>Our datasets and preprocessing code are open source and available on <a target="_blank" href="https://github.com/exponentialR/QUB-HRI">GitHub</a></p>
<p>To gain access to the QUB-PHEO dataset, please visit: <a target="_blank" href="https://github.com/exponentialR/QUB-PHEO">Dataset Access</a></p>
]]></content:encoded></item><item><title><![CDATA[Demystifying Big-O notation]]></title><description><![CDATA[During the first two years of my undergraduate years, I never really understood computational complexities, even after learning about it in my Data Structures and Algorithms class in the second year, it remained a foggy concept. Maybe that’s why I en...]]></description><link>https://samueladebayo.com/demystifying-big-o-notation</link><guid isPermaLink="true">https://samueladebayo.com/demystifying-big-o-notation</guid><category><![CDATA[CS Theory]]></category><category><![CDATA[algorithm]]></category><category><![CDATA[algorithms]]></category><category><![CDATA[Mathematics]]></category><category><![CDATA[data structures]]></category><category><![CDATA[Time Complexity]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Coding Best Practices]]></category><category><![CDATA[software development]]></category><category><![CDATA[Programming concepts]]></category><category><![CDATA[Space Complexity]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Mon, 02 Dec 2024 04:04:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733111868098/aa64ab2c-c42c-4877-98d9-0a2a01b9490e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>During the first two years of my undergraduate years, I never really understood computational complexities, even after learning about it in my Data Structures and Algorithms class in the second year, it remained a foggy concept. Maybe that’s why I ended up with a solid B in the course - the most painful B I got (still haven’t forgiven myself). It was not until my third year, while studying for a project, that I truly grasped what all those <code>O(n)</code>, <code>O(n²)</code>, and <code>O(log n)</code> terms meant. 
And trust me, when the penny finally dropped, it changed the way I have approached algorithmic inference and, by extension, writing codes.</p>
<p>While most lecturers and tutors treat this topic as abstract and uninteresting, computational complexity is more than just abstract theory; it is the distinction between an algorithm that executes in seconds and one that could take hours or days. In this brief post, I will look into the essence of computational complexity, integrating fundamental concepts with real-world examples to help clarify them.</p>
<h3 id="heading-what-is-computational-complexity-really">What is computational complexity, really?</h3>
<p>Let’s start with an analogy. Imagine you are at a theme park, queuing for a rollercoaster, and trying to figure out how long it will take to get on the ride. Often, this depends on:</p>
<ol>
<li><p>How many people are in the queue.</p>
</li>
<li><p>How many people the rollercoaster takes on board at once.</p>
</li>
</ol>
<p>This is a good analogy for computational complexity, as it is all about how long it takes an algorithm (the rollercoaster) to process the input (the people in the queue). Hence, from this, we can deduce that:</p>
<ul>
<li><p>If the rollercoaster can handle only one person at a time, your wait grows in step with the number of people in the queue. This is called <strong>linear time</strong>, written as <code>O(n)</code>. More on this later.</p>
</li>
<li><p>But if the rollercoaster takes multiple people at once - say, halving the queue each time - it is much faster. This is something like <strong>logarithmic time</strong>, written as <code>O(log n)</code>.</p>
</li>
</ul>
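<p>The analogy is easy to sketch in code (toy step counts, not real timings; the function names are mine):</p>

```python
def linear_steps(queue_length):
    # One person boards per ride: one step per person ahead of you.
    return queue_length

def halving_steps(queue_length):
    # The queue halves each ride: count halvings until it is empty.
    steps = 0
    while queue_length > 0:
        queue_length //= 2
        steps += 1
    return steps

print(linear_steps(1024))   # 1024 steps
print(halving_steps(1024))  # 11 steps
```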
<p>Thus, succinctly put:</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>Computational complexity measures the resources required (such as time or memory) by an algorithm to solve a problem. Typically, we express it as a function of the input size, </strong><code>n</code><strong>, to illustrate the growth of the resource requirement as the input size increases.</strong></div>
</div>

<p>To measure this, we need to take into account two technical details:</p>
<ol>
<li><p>Time complexity: how the runtime of an algorithm grows as <em>n</em> increases.</p>
</li>
<li><p>Space complexity: how the memory usage of an algorithm grows with <em>n</em>.</p>
</li>
</ol>
<p>For example, an algorithm that requires storing every item in a list in memory will have a higher space complexity than one that processes items one at a time. These two factors often trade off against each other - the most efficient algorithms balance both.</p>
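<p>To make that trade-off concrete, here is a small sketch (the function names are illustrative) contrasting a version that materialises every item in memory with one that streams items one at a time:</p>

```python
def total_with_list(n):
    # Builds the full list first: O(n) extra memory.
    squares = [i * i for i in range(n)]
    return sum(squares)

def total_streaming(n):
    # A generator yields one value at a time: O(1) extra memory.
    return sum(i * i for i in range(n))

# Same answer, same O(n) time - but very different space complexity.
print(total_with_list(1_000) == total_streaming(1_000))  # True
```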
<h3 id="heading-big-o-notation">Big-O Notation</h3>
<p>In computer science, we often like to complicate things - but only because it is fun and helps us think more abstractly. After all, where is the joy in understanding something complex without appreciating the beauty of its intrinsic patterns?</p>
<p>On the more technical side of things, the <code>O</code> in Big-O stands for “<strong>order</strong>”, reflecting the asymptotic behaviour of a function as its input size approaches infinity. Hence, in plain English, it tells us how the runtime or space requirements of an algorithm grow relative to the size of the input.</p>
<p>Big-O does not care about the exact runtime in seconds (which often depends on the hardware and software); it instead focuses on the growth trend. For example, does the runtime double when the input doubles, or does it increase only slightly?</p>
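<p>One way to see this growth-trend idea concretely is to double the input and watch how different cost functions respond (a toy sketch; the cost functions here are illustrative stand-ins, not measurements):</p>

```python
import math

def costs(n):
    # Toy cost functions for linear, quadratic, and logarithmic growth.
    return {"O(n)": n, "O(n^2)": n * n, "O(log n)": math.log2(n)}

# Doubling n: O(n) doubles, O(n^2) quadruples, O(log n) rises by just 1.
print(costs(1_000))
print(costs(2_000))
```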
<h3 id="heading-big-o-notation-classifications">Big-O Notation Classifications</h3>
<p>So, we can say Big-O notation is more than just a mathematical formality; it is a powerful tool for understanding the efficiency of algorithms across a spectrum of possibilities. From lightning-fast constant-time operations to the painfully slow crawl of exponential growth, Big-O gives us a common language to classify and compare algorithms. Here, we explore the most common classifications of Big-O notation, looking at what they mean and how they can shape the way we approach problem-solving in CS.</p>
<ol>
<li><p><strong>Constant Time - O(1)</strong></p>
<p> In this case, an <em>O(1) algorithm</em> does not care about the input size. It always takes the same amount of time. A classic example is accessing an element in an array by its index. No matter how large the array is, this operation is instantaneous because you don’t need to traverse the entire array or perform any calculations on the other elements. The index acts like a direct pointer to the location in memory, giving you immediate access—hence the constant “1” in <code>O(1)</code>. This is what makes <code>O(1)</code> operations so desirable: their runtime doesn’t grow with the size of the input. Here is how it looks in the code:</p>
<pre><code class="lang-python"> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_element</span>(<span class="hljs-params">input_array, index_position</span>):</span>
     <span class="hljs-keyword">return</span> input_array[index_position]
</code></pre>
<p> In the above example, no matter how large <code>input_array</code> is, retrieving an element at a specific index always takes the same amount of time. The truth is, an O(1) operation is a dream: no matter how large the input is, the runtime remains the same.</p>
</li>
<li><p><strong>Linear Time - O(n)</strong></p>
<p> An algorithm with <em>O(n)</em> time complexity grows linearly with the size of the input: if the input doubles, the runtime also doubles. That is not as fast as <em>O(1)</em>, but it is predictable and manageable. Thus, with <em>O(n)</em>, things start to scale proportionally. A common example of <em>O(n)</em> is finding the maximum value in a list: you have to look at every single element, from the first to the last, so the time taken increases linearly as the size of the list grows.</p>
<p> Here is a code implementation of maximum number search:</p>
<pre><code class="lang-python"> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">maximum_num_search</span>(<span class="hljs-params">input_array</span>):</span>
     maximum_number = input_array[<span class="hljs-number">0</span>]
     <span class="hljs-keyword">for</span> number <span class="hljs-keyword">in</span> input_array:
         <span class="hljs-keyword">if</span> number &gt; maximum_number: 
             maximum_number = number
     <span class="hljs-keyword">return</span> maximum_number
</code></pre>
<p> In this example, the loop iterates through all <em>n</em> elements of <code>input_array</code>: if you have 10 elements, it takes 10 steps; 10,000 elements take 10,000 steps. This is the very essence of linear time - growing in lockstep with the input size. A real-life example in ML is calculating the loss for a batch of data during training: the time required scales linearly with the batch size, since each sample is processed independently. For example, computing the mean squared error loss over <em>n</em> predictions (assuming <code>y_prediction</code> and <code>y_groundtruth</code> are NumPy arrays):</p>
<pre><code class="lang-python"> <span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
 loss = np.sum((y_prediction - y_groundtruth) ** <span class="hljs-number">2</span>) / n
</code></pre>
</li>
<li><p><strong>Quadratic Time - O(n²)</strong></p>
<p> Things start to get less efficient in quadratic time. Algorithms with <code>O(n²)</code> involve nested loops (a loop within a loop): for every element in the input, the algorithm processes every other element, so the runtime grows quadratically as the input size increases. A good example is checking all pairs of elements in an array, where the algorithm compares each element with every other element, leading to <code>n x n = n²</code> iterations.</p>
<pre><code class="lang-python"> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">check_pairs</span>(<span class="hljs-params">input_array</span>):</span>
     <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(len(input_arry)):
         <span class="hljs-keyword">for</span> j <span class="hljs-keyword">in</span> range(len(input_array)):
             print(input_array[i], input_array[j])
</code></pre>
<p> Likewise, if <code>input_array</code> has 10 elements, the code runs 100 iterations; for 1,000 elements, it runs 1,000,000 iterations. As you can imagine, this quickly becomes impractical for large datasets. A real example is calculating the pairwise distance matrix in clustering or in dimensionality-reduction techniques like t-SNE: given <em>n</em> data points, you compute distances for all n² pairs.</p>
<pre><code class="lang-python"> <span class="hljs-keyword">from</span> sklearn.metrics.pairwise <span class="hljs-keyword">import</span> euclidean_distances
 distances = euclidean_distances(X, X)
</code></pre>
<p> Although <em>O(n²)</em> algorithms can sometimes be unavoidable, they should generally be considered a red flag for scalability.</p>
</li>
<li><p><strong>Logarithmic Time - O(log n)</strong></p>
<p> Let’s get a little clever. This time complexity represents algorithms that grow very slowly, even as the input size increases significantly. This is often achieved by repeatedly dividing the problem into smaller chunks rather than processing the entire input. Imagine you are searching for a word in a dictionary: you don’t start from the first page and go one by one. Instead, you flip to the middle, check whether the word is earlier or later alphabetically, and eliminate half of the book in one step. Repeat until you find the word.</p>
<pre><code class="lang-python"> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">binary_search</span>(<span class="hljs-params">input_array, target</span>):</span>
     left, right = <span class="hljs-number">0</span>, len(input_array)<span class="hljs-number">-1</span>
     <span class="hljs-keyword">while</span> left &lt;= right:
         mid = (left + right) // <span class="hljs-number">2</span>
         <span class="hljs-keyword">if</span> input_array[mid] == target:
             <span class="hljs-keyword">return</span> mid
         <span class="hljs-keyword">elif</span> input_array[mid] &lt; target:
             left = mid + <span class="hljs-number">1</span>
         <span class="hljs-keyword">else</span>:
             right = mid - <span class="hljs-number">1</span>
     <span class="hljs-keyword">return</span> <span class="hljs-number">-1</span>  <span class="hljs-comment"># target not found</span>
</code></pre>
<p> This is <strong>binary search</strong> in action, and it is an ideal example: the input (a sorted list) is halved at each step until you reach the target value. This divide-and-conquer approach sharply reduces the number of steps required to get to the solution. If <code>input_array</code> has 1,000 elements, binary search takes only about 10 steps to find the target; for 1,000,000 elements, it takes just 20 steps. This right here is the power of logarithmic time - it scales gracefully, making it a favourite for algorithms dealing with large datasets. In data science, for instance, we see logarithmic time at play when searching for a specific value in a sorted dataset using binary search:</p>
<pre><code class="lang-python"> <span class="hljs-keyword">import</span> bisect
 index = bisect.bisect_left(sorted_list, target)
</code></pre>
</li>
<li><p><strong>Exponential Time: O(2^n)</strong></p>
<p> Exponential time is the real villain of algorithmic efficiency, and trust me, you don’t want to mess with it, especially when dealing with streams of data! These algorithms grow so quickly that even a small increase in input size can lead to astronomically long runtimes. They are usually a last resort for problems one can’t solve efficiently. A good example would be trying to solve the <a target="_blank" href="https://www.lancaster.ac.uk/stor-i-student-sites/lidia-andre/2021/03/30/tower-hanoi/#:~:text=The%20Tower%20of%20Hanoi%20problem,one%20disc%20at%20a%20time">Tower of Hanoi</a> problem recursively: for each additional disk, the work required doubles, leading to exponential growth. The solution might feel elegant in theory, but in practice it quickly becomes computationally infeasible.</p>
<p> <strong>Tower of Hanoi: The Exponential Monster</strong></p>
<p> The <strong>Tower of Hanoi</strong> problem involves moving <em>n</em> disks from one rod to another, following these rules:</p>
<ol>
<li><p>Only one disk can be moved at a time.</p>
</li>
<li><p>A larger disk cannot be placed on top of a smaller disk.</p>
</li>
<li><p>An auxiliary rod can be used as a temporary holding space.</p>
</li>
</ol>
</li>
</ol>
<p>    For <em>n</em> disks, the minimum number of moves required is 2ⁿ − 1. Here’s how it grows:</p>
<ul>
<li><p>For 3 disks: 2³ − 1 = 7 moves.</p>
</li>
<li><p>For 10 disks: 2¹⁰ − 1 = 1,023 moves.</p>
</li>
<li><p>For 20 disks: 2²⁰ − 1 = 1,048,575 moves.</p>
</li>
</ul>
<pre><code class="lang-python">    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">tower_of_hanoi</span>(<span class="hljs-params">n, source, target, auxiliary</span>):</span>
        <span class="hljs-keyword">if</span> n == <span class="hljs-number">1</span>:
            print(<span class="hljs-string">f"Move disk 1 from <span class="hljs-subst">{source}</span> to <span class="hljs-subst">{target}</span>"</span>)
            <span class="hljs-keyword">return</span>
        tower_of_hanoi(n - <span class="hljs-number">1</span>, source, auxiliary, target)
        print(<span class="hljs-string">f"Move disk <span class="hljs-subst">{n}</span> from <span class="hljs-subst">{source}</span> to <span class="hljs-subst">{target}</span>"</span>)
        tower_of_hanoi(n - <span class="hljs-number">1</span>, auxiliary, target, source)  <span class="hljs-comment"># O(2^n)</span>
</code></pre>
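<p>As a quick sanity check on the 2ⁿ − 1 formula, here is a counting variant of the same recursion (an illustrative re-implementation that tallies moves instead of printing them):</p>

```python
def hanoi_moves(n):
    # Same recursion shape as tower_of_hanoi, but counting moves.
    if n == 0:
        return 0
    # Move n-1 disks aside, move the largest disk, move n-1 disks back on top.
    return hanoi_moves(n - 1) + 1 + hanoi_moves(n - 1)

print([hanoi_moves(n) for n in (3, 10, 20)])  # [7, 1023, 1048575]
```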
<p>    With each additional disk, the runtime doubles, making this algorithm impractical for anything but small inputs. A good real-life scenario would be trying to generate all possible transformations of an object in augmented reality, considering every possible rotation and scale for matching against a template in a scene. If you allow 10 possible rotations, 10 scales, and 10 translations, the number of combinations explodes multiplicatively with every parameter you add - a crazy thing to attempt! 🙃</p>
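<p>That combinatorial blow-up is easy to demonstrate (a toy sketch; the discretisation into 10 candidate values per parameter is my assumption for illustration):</p>

```python
from itertools import product

# Hypothetical discretisation: 10 candidate rotations, scales, and translations.
rotations, scales, translations = range(10), range(10), range(10)

combos = list(product(rotations, scales, translations))
print(len(combos))  # 10 * 10 * 10 = 1000 candidate transformations
```

<p>Each extra parameter with <em>k</em> candidate values multiplies the count by <em>k</em>, so the search space grows as k^d in the number of parameters <em>d</em>.</p>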
<p>    Exponential-time algorithms are like pouring petrol on a fire: the runtime grows out of control with each added element. While they may be unavoidable in some theoretical or <a target="_blank" href="https://klu.ai/glossary/np-hardness">NP-hard</a> problems, they’re computationally expensive and typically unsuitable for real-world applications.</p>
<ol start="6">
<li><p><strong>Linearithmic Time - O(n log n)</strong></p>
<p> Finally, <code>O(n log n)</code> is a happy medium between linear and logarithmic time, striking a balance of efficiency for more complex problems. It is commonly seen in divide-and-conquer algorithms, like merge sort and quicksort. A classic example is merge sort, where the list is repeatedly split in half (logarithmic) and then recombined in sorted order (linear).</p>
<pre><code class="lang-python"> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">merge_sort</span>(<span class="hljs-params">arr</span>):</span>
     <span class="hljs-keyword">if</span> len(arr) &gt; <span class="hljs-number">1</span>:
         mid = len(arr) // <span class="hljs-number">2</span>
         left = arr[:mid]
         right = arr[mid:]

         merge_sort(left)
         merge_sort(right)

         i = j = k = <span class="hljs-number">0</span>
         <span class="hljs-keyword">while</span> i &lt; len(left) <span class="hljs-keyword">and</span> j &lt; len(right):
             <span class="hljs-keyword">if</span> left[i] &lt; right[j]:
                 arr[k] = left[i]
                 i += <span class="hljs-number">1</span>
             <span class="hljs-keyword">else</span>:
                 arr[k] = right[j]
                 j += <span class="hljs-number">1</span>
             k += <span class="hljs-number">1</span>

         <span class="hljs-keyword">while</span> i &lt; len(left):
             arr[k] = left[i]
             i += <span class="hljs-number">1</span>
             k += <span class="hljs-number">1</span>

         <span class="hljs-keyword">while</span> j &lt; len(right):
             arr[k] = right[j]
             j += <span class="hljs-number">1</span>
             k += <span class="hljs-number">1</span>

     <span class="hljs-keyword">return</span> arr  <span class="hljs-comment"># O(n log n)</span>
</code></pre>
<p> For a dataset of size <em>n</em>, merge sort splits it into <code>log n</code> levels, with each level requiring <em>n</em> operations to merge. This makes <code>O(n log n)</code> algorithms highly efficient for sorting and other large-scale tasks.</p>
</li>
</ol>
<p>Although we love fancy and complicated terms in CS, we embrace them for a reason: they embody powerful concepts that guide us in building efficient and scalable systems. These terms are not academic jargon; they are the very backbone of smart problem-solving.</p>
<p>So, the next time you’re writing code - whether it’s something you plan to push to production, scale to millions of users, showcase as a proof of concept, or highlight in a research paper - pause and think about its computational complexity. Will your algorithm gracefully handle a growing workload? Does it scale efficiently? Your understanding of these intricacies can be the difference between something that works and something that thrives - and there is a big difference!</p>
<p>Write smart, scale boldly, and let the science behind your algorithms shine!</p>
]]></content:encoded></item><item><title><![CDATA[Looming Pandemic of Digital Addiction]]></title><description><![CDATA[In the current age of technology, there is a silent pandemic brewing, one not of biological origin, but of digital dependency. The usefulness of smartphones has woven them into our daily life, rendering them indispensable to the current generation. H...]]></description><link>https://samueladebayo.com/looming-pandemic-of-digital-addiction</link><guid isPermaLink="true">https://samueladebayo.com/looming-pandemic-of-digital-addiction</guid><category><![CDATA[Digital Addiction]]></category><category><![CDATA[psychology]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Fri, 12 Apr 2024 12:44:56 GMT</pubDate><content:encoded><![CDATA[<p>In the current age of technology, there is a silent pandemic brewing, one not of biological origin, but of digital dependency. The usefulness of smartphones has woven them into our daily life, rendering them indispensable to the current generation. However, this indispensability comes at a hefty price—a growing epidemic of digital addiction that threatens to engulf society in a way previously unseen.</p>
<p>The phenomenon of digital addiction is not new, yet its scale and impact are escalating rapidly. Smartphones, with their endless applications, instant connectivity, and the lure of social media, have become a constant companion for many, offering both the illusion of connection and the reality of isolation. This paradox lies at the heart of the issue, where the tool designed to connect us to the world also distances us from it. The psychological ramifications of this addiction are profound. From reduced attention spans and disrupted sleep patterns to heightened anxiety and depression, the effects are pervasive. The constant barrage of notifications and the compulsion to remain continually connected disrupt mental peace and personal relationships, leading to a cycle of dependency that is hard to break.</p>
<p>The proposal to establish rehabilitation centers for digital addiction might once have seemed far-fetched, yet it is becoming increasingly necessary. These facilities would not merely serve as a retreat from technology but as centers for relearning the art of living. Through counseling, digital detox programs, and the teaching of mindfulness and social skills, individuals can reclaim their autonomy over technology, rather than being ruled by it.</p>
<p>Addressing this pandemic requires a collective effort. It calls for awareness, education, and proactive measures from all sectors of society. Parents, educators, policymakers, and technology creators must work in tandem to create a balanced digital environment. Teaching digital literacy and fostering environments that encourage face-to-face interactions are crucial steps in this direction.</p>
<p>As we stand on the precipice of this digital pandemic, it is imperative to recognize and act upon the challenges it presents. The establishment of rehabilitation centers, while a necessary measure, is but a part of the solution. The ultimate goal should be to foster a society where technology serves to enhance human interactions, not replace them. By embedding ethical considerations in the design of technology and promoting a culture that values personal connections, we can mitigate the effects of digital addiction.</p>
<p>This impending pandemic of digital addiction is a clarion call to reassess our relationship with technology; a reminder that in our quest to connect digitally, we must not sever our ties with the very essence of human experience—real, tangible interactions. The time to act is now, lest we find ourselves ensnared in the very web we have woven.</p>
]]></content:encoded></item><item><title><![CDATA[Diving into Dynamic Realms: My Journey from 2D to 3D Convolutional Neural Networks]]></title><description><![CDATA[If you would like to dive right into the code please see here
In the ever-evolving landscape of computer vision, the transition from static imagery to the dynamic world of videos marks a significant leap to understanding the dynamicity of our world. ...]]></description><link>https://samueladebayo.com/3dcnn-intro</link><guid isPermaLink="true">https://samueladebayo.com/3dcnn-intro</guid><category><![CDATA[Computer Science]]></category><category><![CDATA[Computer Vision]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[CNN]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Tue, 14 Nov 2023 23:06:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1700002795016/63ccc19d-6cfc-4328-a73b-43c86ed80672.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>If you would like to dive right into the code</em> <a target="_blank" href="https://github.com/exponentialR/3DCNN"><em>please see here</em></a></p>
<p>In the ever-evolving landscape of computer vision, the transition from static imagery to the dynamic world of videos marks a significant leap toward understanding the dynamism of our world. As someone who has spent years unravelling the <code>mysteries</code> hidden in static images using 2D Convolutional Neural Networks, I find myself at an exciting juncture in my PhD journey - diving into the spatio-temporal context. The shift from analyzing still frames to understanding the intricate sequences of video data is not just a step forward in complexity, but a step into a realm brimming with untapped potential and unexplored challenges. My exploration of this domain is driven by a simple yet profound realization: our world is not static. It is a dynamic tapestry where each moment is a continuation of the last, a story unfolding in time.</p>
<p>In my previous work, 2D CNNs served as a powerful tool, adept at capturing spatial hierarchies and patterns within images, exploring the intricate relationship between pixels, and encoding subtle patterns via edges and corners. However, as I delve into video data, I find myself in need of a more sophisticated ally - one capable of understanding not just the spatial but also the temporal nuances of visual data. This is exactly where 3D Convolutional Neural Networks (3D CNNs) enter the picture.</p>
<p>My shift to 3D CNNs is more than just an academic interest; it is a journey towards a deeper understanding of how we can enable machines to perceive and interpret the world in its full dynamism, with all its stochasticity and uncertainty, much like we do. Every video clip is a symphony of motions, emotions, and interactions, layered with subtle meanings, and 3D CNNs promise to be the key to deciphering these complex sequences. As I embark on this journey, I aim not just to expand the boundaries of my knowledge, but also to contribute to the broader field of computer vision, pushing towards systems that can understand and interact with the world in richer, more meaningful ways.</p>
<p>In subsequent blog posts, I invite you to join me in exploring 3D CNNs - from the core concepts that distinguish them from their 2D counterparts to the intricate challenges and learning curves I have encountered while applying them to video data. Whether you are a seasoned expert in the field, a beginner, a grad student, or a curious onlooker, I hope to offer insights and experiences that resonate with this domain.</p>
<h3 id="heading-background-and-core-concepts"><strong>Background and Core Concepts</strong></h3>
<h4 id="heading-the-evolution-from-2d-to-3d-cnns">The Evolution from 2D to 3D CNNs</h4>
<p><strong>Understanding CNNs</strong>: Convolutional Neural Networks (CNNs) have been the cornerstone of image analysis in computer vision for years. Traditional 2D CNNs are adept at processing static images—learning spatial hierarchies and patterns by applying filters that capture various aspects of the image, such as edges, textures, and shapes. If you would like to find out more about 2D CNN, please refer to my <a target="_blank" href="https://github.com/exponentialR/SamuelAdebayo/tree/main/ML-Slides">slides and labs here</a></p>
<p><strong>Limitation in Capturing Temporal Information</strong>: While 2D CNNs excel in spatial understanding, they fall short in comprehending temporal dynamics, which is crucial when dealing with video data. Videos are essentially sequences of frames, where each frame is tied to its predecessor and successor, creating a temporal continuity that 2D CNNs cannot capture.</p>
<h4 id="heading-the-emergence-of-3d-cnns">The Emergence of 3D CNNs</h4>
<p><strong>Introduction to 3D CNNs</strong>: This is where 3D Convolutional Neural Networks change the game. Unlike their 2D counterparts, 3D CNNs are designed to understand both spatial and temporal features. They achieve this by adding an additional dimension—time—to the convolutional process.</p>
<p><strong>How 3D CNNs Work</strong>: In a 3D CNN, the convolutional filters extend along three dimensions—height, width, and depth (time). This allows the network to not only learn from the spatial content of each frame but also gain insights into the motion and changes occurring across frames. As a result, 3D CNNs can unravel the complex tapestry of actions and interactions in video sequences.</p>
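<p>To make the extra dimension concrete, here is a minimal NumPy sketch of a single-channel 3D convolution. This is an illustrative toy of my own devising, not a framework implementation; the point is simply to show the kernel sliding along time as well as height and width:</p>

```python
import numpy as np

def conv3d_naive(video, kernel):
    """Naive valid-mode 3D convolution (really cross-correlation, as in
    deep-learning frameworks) over the (time, height, width) axes."""
    T, H, W = video.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):          # slide along time
        for j in range(out.shape[1]):      # slide along height
            for k in range(out.shape[2]):  # slide along width
                out[i, j, k] = np.sum(video[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# A toy "video": 8 frames of 16x16 pixels, and a 3x3x3 kernel.
video = np.random.rand(8, 16, 16)
kernel = np.random.rand(3, 3, 3)
out = conv3d_naive(video, kernel)
print(out.shape)  # (6, 14, 14): the kernel also slides along the time axis
```

<p>In practice you would of course use an optimized layer such as PyTorch's <code>nn.Conv3d</code> rather than explicit loops; the sketch only shows where the temporal dimension enters the computation.</p>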
<h4 id="heading-applications-of-3d-cnns">Applications of 3D CNNs</h4>
<p><strong>Beyond Static Frames</strong>: The ability of 3D CNNs to interpret time makes them incredibly powerful for a range of applications. This includes action recognition in videos, where understanding the sequence of movements is key, and medical imaging, where temporal changes in 3D scans can indicate crucial health information. In each of these areas, 3D CNNs offer a more comprehensive understanding by considering the evolution of visual data over time.</p>
<p><strong>Challenges and Opportunities</strong>: The shift to 3D CNNs, however, is not without its challenges. The addition of the temporal dimension increases the computational complexity significantly. Additionally, training 3D CNNs requires not only larger datasets but also datasets that accurately represent temporal variations.</p>
<h3 id="heading-my-research-odyssey-with-3d-cnns"><strong>My Research Odyssey with 3D CNNs</strong></h3>
<h4 id="heading-transitioning-to-spatio-temporal-analysis">Transitioning to Spatio-Temporal Analysis</h4>
<p><strong>Initial Exploration</strong>: My journey into 3D CNNs began as an extension of my work with 2D CNNs, where I had focused on spatial feature extraction from static images. The transition to 3D CNNs marked a significant shift towards integrating the temporal dimension. My initial challenge lay in comprehending the intricacies of 3D convolutional layers – understanding how they extend the spatial interpretation of 2D CNNs to include temporal relationships.</p>
<p>The architectural nuances of 3D CNNs, such as the incorporation of time as a third dimension in convolutional operations, presented both a conceptual and practical learning curve. This was not merely about adapting to a new technique but rethinking the approach to data representation and processing.</p>
<h4 id="heading-navigating-data-complexity">Navigating Data Complexity</h4>
<p><strong>Data Preprocessing and Management</strong>: One of the most formidable challenges I faced was the preprocessing of video data. Unlike static images, video data comes with additional complexities like variable frame rates, diverse resolutions, and most crucially, a substantial increase in data volume. Developing an efficient preprocessing pipeline that could handle such diversity and volume was paramount. This involved not only frame extraction and resizing but also temporal sampling strategies to capture relevant motion information without overburdening the computational process.</p>
<p><strong>Architectural Design and Computational Considerations</strong>: Designing the architecture of a 3D CNN requires a delicate balance. The model had to be sophisticated enough to capture intricate temporal patterns without becoming computationally infeasible. This entailed an iterative process of model design, where each layer's parameters were carefully calibrated to maximize learning while minimizing computational costs. The extended training durations and heightened resource demands of 3D CNNs necessitated a more strategic approach, leveraging distributed computing and optimizing algorithms for efficiency.</p>
<h4 id="heading-gleaning-insights-and-developing-solutions">Gleaning Insights and Developing Solutions</h4>
<p><strong>Performance Optimization</strong>: In pursuit of optimal performance, I explored a variety of architectural tweaks and parameter adjustments. Strategies such as modifying stride and kernel size in convolutional layers, and incorporating advanced techniques like transfer learning, played a crucial role in surmounting the limitations imposed by the sheer scale of video data.</p>
<p><strong>Combating Overfitting</strong>: The increased parameter count in 3D CNNs heightened the risk of overfitting. To mitigate this, I implemented a combination of regularization strategies, data augmentation techniques, and dropout layers. These measures were critical in ensuring that the model generalized well, rather than merely memorizing the training data.</p>
<h4 id="heading-reflecting-on-the-journey">Reflecting on the Journey</h4>
<p>Working with 3D CNNs reinforced the virtue of patience. The field of 3D convolutional analysis is still burgeoning, with much left to explore and understand. Navigating this terrain often required an iterative, trial-and-error approach, underscoring the importance of resilience in research.</p>
<h3 id="heading-charting-future-pathways"><strong>Charting Future Pathways</strong></h3>
<h4 id="heading-advancing-3d-cnn-research">Advancing 3D CNN Research</h4>
<p><strong>Harnessing Technological Growth</strong>: As computational capabilities continue to advance and datasets grow both in size and complexity, the potential applications of 3D CNNs are set to broaden significantly. I am particularly intrigued by the prospects in domains like augmented reality, where interpreting both spatial and temporal information is key to creating immersive experiences.</p>
<p><strong>Ongoing Exploration</strong>: My foray into 3D CNNs is an ongoing chapter in my academic journey. I'm keen to delve deeper into novel architectures and apply these models across a wider spectrum of applications. The ultimate goal is to push the frontiers of computer vision and contribute to the development of systems that can interact with our dynamic world more intelligently and intuitively.</p>
<h3 id="heading-sneak-peek-into-the-next-blog-post"><strong>Sneak Peek into the Next Blog Post</strong></h3>
<p><strong>Intuition, Mathematics, Code: A Technical Deep Dive into 3D CNNs</strong></p>
<p>In my upcoming blog post, we'll take a technical deep dive into the world of 3D Convolutional Neural Networks. I'll unravel the intuition behind these sophisticated models, illuminating how they interpret not just the visual cues in static images but also the temporal dynamics in videos. We'll delve into the mathematics that underpins these networks, demystifying how they learn and process information across both space and time. Expect to see detailed discussions on model architecture, accompanied by snippets of code that bring these concepts to life. Whether you're keen on understanding the nuts and bolts of 3D convolution operations or interested in the practical aspects of implementing these models in PyTorch, the next post promises to be a treasure trove of insights.</p>
<p>From discussing the nuances of kernel size and stride in 3D convolutions to exploring strategies for optimizing network performance, we will cover a spectrum of topics that will cater to both beginners and seasoned practitioners in the field. The goal is to provide you with a comprehensive understanding of 3D CNNs that balances theoretical depth with practical applicability. So, stay tuned for an enriching journey into the technical heart of 3D CNNs!</p>
<p><a target="_blank" href="https://github.com/exponentialR/3DCNN">To take a sneak peek at an experimental 3D CNN architecture, please check here</a></p>
]]></content:encoded></item><item><title><![CDATA[Camera Calibration Demystified: Part 2 - Applications and Lens Distortion]]></title><description><![CDATA[Introduction
In Part 1 of this series on camera calibration, we laid the groundwork by exploring the fundamental principles that govern how cameras translate the 3D world into a 2D image. We delved into camera models and the intrinsic and extrinsic p...]]></description><link>https://samueladebayo.com/camera-calibration-demystified-part-2-applications-and-lens-distortion</link><guid isPermaLink="true">https://samueladebayo.com/camera-calibration-demystified-part-2-applications-and-lens-distortion</guid><category><![CDATA[Computer Vision]]></category><category><![CDATA[Camera Calibration]]></category><category><![CDATA[Mathematics]]></category><category><![CDATA[Python]]></category><category><![CDATA[opencv]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Sun, 22 Oct 2023 19:13:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1698002967308/c5c48c3b-2ac7-4b57-9b25-8f22d775a56b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>In <a target="_blank" href="https://samueladebayo.com/camera-calibration-part-1">Part 1 of this series on camera calibration</a>, we laid the groundwork by exploring the fundamental principles that govern how cameras translate the 3D world into a 2D image. We delved into camera models and the intrinsic and extrinsic parameters that play a vital role in this transformation. But that was merely scratching the surface.</p>
<p>In this second instalment, I'm going to broaden the scope significantly. We'll venture into the critical importance of camera calibration across various real-world applications—from robotics to autonomous vehicles and even the arts. We'll also uncover the lens distortions that could potentially mar your images and then look at the mathematical equations behind them.</p>
<p>So if you've ever wondered how self-driving cars make sense of their environment, how augmented reality applications manage to superimpose digital elements so naturally, or even questioned the mechanics behind your DSLR's crisp photos, you're in for a treat.</p>
<h3 id="heading-reasons-for-calibration">Reasons for Calibration</h3>
<p>The importance of camera calibration extends far beyond academic interest—it plays a critical role in various real-world applications. I'll investigate why camera calibration is indispensable in key areas in this section.</p>
<h4 id="heading-1-robotics-and-automation">1. Robotics and Automation</h4>
<p>In robotics, precision is not just a nice-to-have - it is the name of the game. Whether they are on bustling factory floors or helping people in their homes, these machines have to 'know' what is around them and where exactly it is located. This is even more true for robots rocking machine perception tech, which allows them to interpret and make sense of their surroundings. Getting the camera calibration right in settings like these is a big deal.</p>
<p>Take a factory assembly line, for example. Robots are often kitted out with cameras and machine perception algorithms to identify parts or objects. Mess up the camera calibration, and you're in for a world of trouble. Imagine a robot misjudging the position of a piece it's supposed to pick. That's the kind of error that can start a chain reaction of problems. This is not just about assembly lines or specific tasks, either. Suppose a robot is to pick up an item and place it somewhere specific - a well-calibrated camera ensures that the robot's actions are spot-on with what it is seeing. This is not limited to the task at hand but to the robot's ability to navigate more complex situations. Think about it: a finely calibrated camera can act like a robot's "sixth sense", allowing for on-the-fly adjustments during the job.</p>
<p>To sum it up, nailing camera calibration in robotics and automation isn't just a good practice; it is a must. Whether for aiding complex tasks or helping a robot safely navigate an unstructured environment, getting the camera settings right can either make or break the whole operation.</p>
<h4 id="heading-2-autonomous-vehicles">2. Autonomous Vehicles</h4>
<p>We are on the brink of a game-changer - self-driving cars are about to become a common sight on our roads. But let us not forget, the tech making this possible is anything but simple. At the core, we have advanced vision systems that let these vehicles 'see' the world around them. However, seeing is not always enough; these systems must also be spot-on when interpreting this visual data for real-time decision-making. This is precisely where camera calibration comes in and becomes a critical piece of the puzzle.</p>
<p>For a minute, think about the challenges of driving autonomously. Cars must navigate a world filled with other vehicles, pedestrians, and other unpredictable elements. Get the camera calibration wrong, and you are asking for trouble. The result of miscalibration? Potentially misjudging the distance to the car in front, which could translate to insufficient time to brake or even a full-on collision.</p>
<p>Here is the kicker: autonomous cars rely on many machine vision tasks, such as detecting obstacles, understanding road signs, and interpreting road markings. Many of these cars require more than one camera, each serving a specific purpose. Hence, calibrating each camera is not a one-off job; it is about ensuring all these cameras work together harmoniously.</p>
<h4 id="heading-3-augmented-and-virtual-reality">3. Augmented and Virtual Reality</h4>
<p>Okay, let's talk AR and VR. These are realms where the line between the digital and the real world gets blurry. Whether overlaying virtual furniture in your real living room or immersing yourself in a completely digital world, the experience has to feel real. That's why camera calibration is a big deal in AR and VR tech.</p>
<p>Think about it. You put on a VR headset and step into a virtual world. You move your head, and the perspective changes perfectly in sync. That's not magic—it's precise calibration. If the camera's off even by a little, you might start to feel motion sickness or have a subpar experience. That's the last thing you want when battling space pirates or exploring a virtual museum.</p>
<p>Now, switch gears to AR. Imagine using an app on your smartphone to visualize how a new sofa would look in your living room. The app has to blend digital objects with the real world smoothly. If the camera calibration is off, that sofa might look like it's floating in mid-air or sinking into the floor. Not the best way to make a buying decision, right?</p>
<p>And let's not forget about more advanced applications. For example, getting the camera calibration wrong could be life and death in medical AR. Surgeons often use AR tech for guided procedures. In scenarios like this, the calibration needs to be absolutely spot-on for accurate guidance and successful outcomes.</p>
<p>So, all in all, whether you're gaming, shopping, or even performing surgery, camera calibration in AR and VR isn't just about enhancing the experience—it's about making it possible in the first place.</p>
<h4 id="heading-4-film-and-photography">4. Film and Photography</h4>
<p>Let's get into film and photography, where camera calibration isn't just about the tech—it's also about the art. In settings that demand a heavy dose of scientific rigor, like wildlife documentaries or high-speed sports action, getting your camera settings right is non-negotiable. Picture this: you're shooting a documentary on migratory birds. A well-calibrated camera lets you capture beautiful shots and accurate data on how fast and high these birds fly. That's adding a layer of scientific credibility to your storytelling.</p>
<p>But hey, it's not all about the numbers. Camera calibration also plays a starring role in the artistic side of things. Take landscape photography, for instance. You want those mountain ranges and valleys to look as majestic in the photo as they do in real life. A calibrated camera ensures that the proportions and spatial relationships within the frame are just right, enhancing your shots' emotional impact and narrative quality.</p>
<p>And let's not forget the controlled chaos of a studio setting. Calibration is your best friend, whether you're doing product photography, snapping high-fashion looks, or capturing fine art reproductions. In essence, camera calibration in film and photography is more than a behind-the-scenes technicality; it's a linchpin that can elevate your work from good to great. It's not just about getting the colour balance or the focus right; it's about capturing the subject's soul, be it a fast-paced sporting event or a still life. When your camera is finely tuned, your work speaks volumes—conveying scientific facts or evoking deep emotions.</p>
<h3 id="heading-types-of-distortion">Types of Distortion</h3>
<p>Regarding camera calibration, addressing distortions is not just a side quest - it is the main objective. Distortions are discrepancies between the captured image and the real-world scene, affecting the accuracy of the camera's representations. Distortions can be characterised as deviations from the ideal imaging model, where rays from a single point in three-dimensional space converge at a single point on the imaging sensor. Numerous factors contribute to distortions, including lens shape, refractive index variations, and manufacturing imperfections. These distortions have various types, each with a mathematical model and correction method. Understanding these distortions is pivotal for calibrating the camera to achieve high accuracy in multiple applications.</p>
<h4 id="heading-a-radial-distortions">A. Radial Distortions</h4>
<p><strong> 1. Barrel Distortions:</strong> Barrel distortion is a sub-type of radial distortion, where straight lines appear to curve outward from the centre of the image.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697994379079/90d52331-74c4-4ae4-aa76-f363b8e7dfdf.png" alt class="image--center mx-auto" /></p>
<p>The image magnification decreases with distance from the optical axis. This causes straight lines near the edge of the field to bow outward, resembling the bulging sides of a barrel. This type of distortion is common in wide-angle lenses.</p>
<p><strong> 2. Pincushion Distortion:</strong> Conversely, image magnification increases with the distance from the optical axis in pincushion distortion. The result is that straight lines bend inward toward the centre, akin to the pinched sides of a pincushion.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697994279778/0eda4134-2def-4466-a282-d03371951dcc.png" alt class="image--center mx-auto" /></p>
<p>Mathematically, a unified model can represent both barrel and pincushion distortions, often employing higher-order polynomials, which is particularly useful when working with more complicated lens systems. The general formula is:</p>
<p>$$\begin{align*} x' &amp;= x\left(1 + k_1 r^2 + k_2 r^4 + \ldots\right) \\ y' &amp;= y\left(1 + k_1 r^2 + k_2 r^4 + \ldots\right) \end{align*}$$</p><p>Here, (<strong><em>x</em></strong>, <strong><em>y</em></strong>) are the original coordinates, (<strong><em>x'</em></strong>, <strong><em>y'</em></strong>) are the distorted coordinates, <strong><em>k<sub>1</sub>, k<sub>2</sub></em></strong>, ... are the distortion coefficients, and r is the radial distance from the centre of the image, calculated as:</p>
<p>$$r = \sqrt{x^2 + y^2}$$</p><p>In this general model:</p>
<ul>
<li><p>A positive <strong><em>k<sub>1</sub></em></strong> will produce pincushion distortion, as lines bend inward toward the centre.</p>
</li>
<li><p>A negative <strong><em>k<sub>1</sub></em></strong> will produce barrel distortion, where lines bow outward from the centre.</p>
</li>
<li><p>Higher-order terms like <strong><em>k<sub>2</sub></em></strong> allow for more complex distortion patterns, which might be observed in more complex or lower-quality lens systems.</p>
</li>
</ul>
<p>The model is extendable to as many terms as necessary, but in practice, most systems are sufficiently modelled using just <strong><em>k<sub>1</sub></em></strong> and sometimes <strong><em>k<sub>2</sub></em></strong>.</p>
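<p>To see the radial model in action, here is a small NumPy sketch that applies it to a single point. It assumes (as this example's own convention) normalized image coordinates with the optical centre at the origin, and the coefficient values are arbitrary illustrations:</p>

```python
import numpy as np

def radial_distort(x, y, k1, k2=0.0):
    """Apply the radial model x' = x(1 + k1*r^2 + k2*r^4), same for y."""
    r2 = x**2 + y**2          # r^2, the squared distance from the centre
    factor = 1.0 + k1 * r2 + k2 * r2**2
    return x * factor, y * factor

x, y = 0.5, 0.5  # a point away from the optical centre
print(radial_distort(x, y, k1=+0.2))  # pushed outward: pincushion behaviour
print(radial_distort(x, y, k1=-0.2))  # pulled toward the centre: barrel behaviour
```

<p>Note how the sign of <code>k1</code> alone flips the point between moving away from and toward the centre, matching the two distortion types above.</p>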
<h4 id="heading-b-tangential-distortions">B. Tangential Distortions</h4>
<p>These distortions occur when the lens and the imaging plane are not parallel. While radial distortions displace image points radially outward from the centre, tangential distortions act orthogonally to them, shifting points horizontally and vertically in a way unrelated to their distance from the optical axis. As a result, the image can appear tilted or skewed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697995709722/f516bc89-ba0d-4105-b706-e7b8ae0d9bda.png" alt class="image--center mx-auto" /></p>
<p>Mathematically, tangential distortion can be expressed as:</p>
<p>$$\begin{align*} x' &amp;= x + \left(2p_1 xy + p_2 (r^2 + 2x^2)\right) \\ y' &amp;= y + \left(p_1 (r^2 + 2y^2) + 2p_2 xy\right) \end{align*}$$</p><p>Here, <strong><em>x'</em></strong> and <strong><em>y'</em></strong> are the distorted coordinates. <strong><em>x</em></strong> and <strong><em>y</em></strong> are the original coordinates, and <strong><em>r</em></strong> is the radial distance from the origin, calculated as <strong><em>r</em></strong> = <strong><em>√(x<sup>2</sup> + y<sup>2</sup>)</em></strong>.</p>
<p>The coefficients <strong><em>p<sub>1</sub></em></strong> and <strong><em>p<sub>2</sub></em></strong> are the tangential distortion coefficients. These terms aim to correct the tilt in the lens and bring the captured image closer to what would be captured if the lens were perfectly aligned. By adjusting the coefficients during the camera calibration process, one can minimize the effects of tangential distortions.</p>
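<p>The tangential model can be sketched in the same style as the radial case. Again, the point is assumed to be in normalized coordinates with the centre at the origin, and the coefficient values are arbitrary illustrations:</p>

```python
import numpy as np

def tangential_distort(x, y, p1, p2):
    """Apply the tangential distortion model for a tilted lens."""
    r2 = x**2 + y**2  # squared radial distance from the centre
    x_d = x + (2.0 * p1 * x * y + p2 * (r2 + 2.0 * x**2))
    y_d = y + (p1 * (r2 + 2.0 * y**2) + 2.0 * p2 * x * y)
    return x_d, y_d

# A small nonzero p1 shifts this point vertically more than horizontally,
# which is exactly the skewing effect described above.
print(tangential_distort(0.5, 0.5, p1=0.01, p2=0.0))
```

<p>With both coefficients at zero the model reduces to the identity, which is the sanity check calibration toolkits rely on when a lens is perfectly aligned.</p>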
<h3 id="heading-conclusion">Conclusion</h3>
<p>In this second instalment, we've delved deeper into the reasons for camera calibration across various industries, touched on different types of distortions, and hinted at the mathematics involved. However, we've only scratched the surface. In Part 3, we'll dive into the heart of the mathematics that makes accurate camera calibration possible. From optimization problems to factoring in distortions, we'll explore how all these elements combine to create a robust camera model. Stay tuned!</p>
<h3 id="heading-references">References</h3>
<p>For those looking to delve deeper into the topics covered in this blog post, the following resources are highly recommended:</p>
<p>[1] <a target="_blank" href="https://github.com/exponentialR/SamuelAdebayo/tree/main/CameraCalibration">Codes for distortion plots</a></p>
<p>[2] Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman</p>
<p>[2] <a target="_blank" href="https://web.stanford.edu/class/cs231a/course_notes/01-camera-models.pdf"><strong>Stanford CS231A: Camera Models</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Retrospective: Teaching Intro to Python Programming]]></title><description><![CDATA[Hello, everyone! I had the privilege of teaching a programming class "Python Programming" course at Belfast Metropolitan College during Autumn 2022 and Winter of 2023. As we move into Autumn, I've decided to share the lecture slides and occasionally ...]]></description><link>https://samueladebayo.com/retrospective-teaching-intro-to-python-programming</link><guid isPermaLink="true">https://samueladebayo.com/retrospective-teaching-intro-to-python-programming</guid><category><![CDATA[Python]]></category><category><![CDATA[Learning Journey]]></category><category><![CDATA[learn coding]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Mon, 16 Oct 2023 05:03:40 GMT</pubDate><content:encoded><![CDATA[<p>Hello, everyone! I had the privilege of teaching a programming class "Python Programming" course at Belfast Metropolitan College during Autumn 2022 and Winter of 2023. As we move into Autumn, I've decided to share the lecture slides and occasionally the recorded classes from this course every week.</p>
<p><a target="_blank" href="https://github.com/exponentialR/SamuelAdebayo/blob/main/Week1%20-%20Introduction.pdf">Week 1 Lecture Slides here</a></p>
<h4 id="heading-why-share-now">Why Share Now?</h4>
<p>Sharing educational resources has always been a way to democratise knowledge. Whether you're a student who took the course and wants to revisit the material or someone who's just getting started with Python, these resources will serve as a comprehensive guide.</p>
<h4 id="heading-what-did-week-1-cover">What Did Week 1 Cover?</h4>
<p>To give you a taste of what's to come, week one was all about laying the foundation:</p>
<h5 id="heading-module-introduction-we-discussed-what-the-course-aimed-to-achieve-and-why-python-is-an-invaluable-language-to-learn">Module Introduction: We discussed what the course aimed to achieve and why Python is an invaluable language to learn.</h5>
<h5 id="heading-introduction-to-computer-programming-the-course-kicked-off-by-laying-down-the-basics-of-computer-programming-and-its-relevance-in-todays-digital-landscape">Introduction to Computer Programming: The course kicked off by laying down the basics of computer programming and its relevance in today's digital landscape.</h5>
<h5 id="heading-programming-basics-students-were-introduced-to-the-fundamental-building-blocks-of-all-programming-languages">Programming Basics: Students were introduced to the fundamental building blocks of all programming languages.</h5>
<h5 id="heading-natural-language-vs-programming-language-a-comparative-look-at-how-our-everyday-language-differs-from-programming-languages-and-why-that-matters">Natural Language vs Programming Language: A comparative look at how our everyday language differs from programming languages and why that matters.</h5>
<p>Translators, Compilers, and Assemblers: An overview of the tools that make coding in Python possible and how they work.</p>
<h4 id="heading-what-to-expect">What to Expect?</h4>
<p>Each week, I'll post the slides corresponding to that week's topics. Occasionally, I will also share the recorded lectures for those who prefer a more interactive learning experience.</p>
<p>Whether you're a beginner in Python or looking to refresh your knowledge, stay tuned for weekly updates that will take you from the basics to more advanced topics. Don't forget to check back each week for new materials, and happy learning!</p>
]]></content:encoded></item><item><title><![CDATA[Camera Calibration Demystified: Part 1 - Fundamentals and Models]]></title><description><![CDATA[Introduction
Imagine you're taking a photo of a building with your smartphone. You might notice that the lines of the building don't appear as straight as they do in real life, or the proportions seem slightly off. These are distortions, and discrepa...]]></description><link>https://samueladebayo.com/camera-calibration-part-1</link><guid isPermaLink="true">https://samueladebayo.com/camera-calibration-part-1</guid><category><![CDATA[Computer Vision]]></category><category><![CDATA[Mathematics]]></category><category><![CDATA[camera]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Sun, 01 Oct 2023 13:19:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1696165838827/5dab2b8a-fa43-4d91-b987-10d2c0159abf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>Imagine you're taking a photo of a building with your smartphone. You might notice that the lines of the building don't appear as straight as they do in real life, or the proportions seem slightly off. These are distortions: discrepancies between real-world objects and their representations in the image. Such distortions often occur due to the inherent limitations of camera lenses and sensors as they attempt to map a 3D world onto a 2D plane.</p>
<p>Camera calibration is the technique used to understand and correct these distortions. It's a fundamental process for achieving more accurate visual representations, especially in applications like augmented reality, robotics, and 3D reconstruction. In this first part of our series on camera calibration, we'll explore the foundational concepts and models that serve as the backbone of this technique. We'll delve into the intrinsic and extrinsic parameters that influence how a camera captures an image and discuss how these parameters can be determined to correct distortions. By the end of this post, you'll have a solid understanding of the principles behind camera calibration and its importance in various domains.</p>
<h3 id="heading-camera-models">Camera Models</h3>
<p>To understand the intricacies of camera imaging, it's useful to connect the dots with real-world applications. Take the example of a self-driving car, which relies on its camera to accurately gauge the dimensions and distances of surrounding elements like pedestrians, other vehicles, and road signs. Just as understanding the human eye's perception aids in comprehending our interaction with the 3D world, grasping the mechanics of a camera model enhances the precision of such measurements in automated systems. To unpack this further, let's engage in a thought experiment: envision a simple setup (See figure 1) where a small barrier with a pinhole is placed between a 3D object and a film. Light rays from the object pass through the pinhole to create an image on the film. This basic mechanism serves as the cornerstone for what is known as the pinhole camera model, a foundational concept that allows us to fine-tune the way cameras, like the one in a self-driving car, interpret the world.</p>
<h4 id="heading-pinhole-camera-model">Pinhole Camera Model</h4>
<h5 id="heading-the-setup"><strong>The setup</strong></h5>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696162002489/2dc69ea2-1d6f-4ebf-9a41-4705332915d1.png" alt class="image--center mx-auto" /></p>
<center>Figure 1: The Pinhole Camera model [2]</center>

<p>In the pinhole model, consider a 3D coordinate system defined by unit vectors <strong><em>i</em></strong>, <strong><em>j</em></strong>, <strong><em>k</em></strong>. Place an object point P with coordinates (<strong><em>X, Y, Z</em></strong>) in this world. The camera's aperture is at the origin <strong>O</strong>, and the image plane (or film) is parallel to the i-j plane at a distance f along the k-axis. The film has a centre <strong><em>C'</em></strong>, and the projection of <strong><em>P</em></strong> onto the film is <strong><em>P'</em></strong> with 2D coordinates (<strong><em>x, y</em></strong>) (refer to Figures 1 and 2).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696162197761/f7c4f796-f04f-447d-9d58-d8a717da1edd.png" alt class="image--center mx-auto" /></p>
<center>Figure 2: A formal construction of the Pinhole Camera model [2]</center>

<h5 id="heading-the-mathematics"><strong>The Mathematics</strong></h5>
<p>To relate <strong><em>P</em></strong> and <strong><em>P'</em></strong>, we draw a line from <strong><em>P</em></strong> through the aperture <strong><em>O</em></strong>, intersecting the film at <strong><em>P'</em></strong>. The right triangles that this ray forms with the optical axis on either side of <strong><em>O</em></strong> are similar, which gives us:</p>
<p>$$\frac{x}{f} = \frac{X}{Z} \quad \text{and} \quad \frac{y}{f} = \frac{Y}{Z}$$</p><p>Solving for <strong><em>x</em></strong> and <strong><em>y</em></strong>, we get:</p>
<p>$$\begin{align*} x &amp;= f \left( \frac{X}{Z} \right), \\ y &amp;= f \left( \frac{Y}{Z} \right). \end{align*}$$</p><p>Here, <em>f</em> represents the focal length of the camera.</p>
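<p>As a quick sanity check, these projection equations can be sketched in a few lines of NumPy (a minimal illustration; the point coordinates and focal length below are arbitrary):</p>

```python
import numpy as np

def pinhole_project(point_3d, f):
    """Project a 3D point (X, Y, Z) onto the image plane of an
    ideal pinhole camera with focal length f."""
    X, Y, Z = point_3d
    return np.array([f * (X / Z), f * (Y / Z)])

# Example: a point 2 units right, 1 unit up and 4 units deep, with f = 2
p = pinhole_project((2.0, 1.0, 4.0), f=2.0)
print(p)  # [1.  0.5]
```

<p>Note how the division by Z is what produces perspective: doubling the depth of the point halves its image coordinates.</p>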
<h4 id="heading-lens-models">Lens Models</h4>
<p>While the pinhole model gives us an idealized perspective of image formation, real-world cameras use lenses to focus light. Lenses introduce additional complexities due to their shape, material, and how they bend light rays. These models account for additional factors like focal length, aperture, and lens distortions. Let's explore lens models to understand these intricacies.</p>
<h5 id="heading-the-setup-1"><strong>The Setup</strong></h5>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696162656186/1a69c633-2f3e-47f6-a2d8-0acc5ac0c4c0.png" alt class="image--center mx-auto" /></p>
<center>Figure 3: The Simple lens model [2]</center>

<p>Like the pinhole model, lens models use a 3D coordinate system defined by <strong><em>i</em></strong>, <strong><em>j</em></strong>, <strong><em>k</em></strong>. However, instead of a pinhole at <strong><em>O</em></strong>, we have a lens. The image plane is still at a distance f along the <strong><em>k-axis</em></strong>; we denote the centre of this plane as <strong><em>C'</em></strong> (refer to Figures 3 and 4).</p>
<h5 id="heading-the-mathematics-1"><strong>The Mathematics</strong></h5>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696162458374/87f2a71a-1d96-493b-9c16-a7cb1659c7ae.png" alt class="image--center mx-auto" /></p>
<center>Figure 4: The Simple lens model: Relationship between points in the focal plane and the real world (3D)[2]</center>

<p>In lens models, we need to account for distortions introduced by the lens. These distortions are typically represented by <strong>d<sub>x</sub></strong> and <strong>d<sub>y</sub></strong>, affecting the x and y coordinates, respectively. The equations for x and y in lens models are:</p>
<p>$$\begin{align*} x &amp;= f \left( \frac{X}{Z} \right) + d_x, \\ y &amp;= f \left( \frac{Y}{Z} \right) + d_y. \end{align*}$$</p><p>In these equations, dx and dy are functions of <strong><em>X</em></strong>, <strong><em>Y</em></strong>, and <strong><em>Z</em></strong> and they represent the distortions introduced by the lens.</p>
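<p>The exact form of d<sub>x</sub> and d<sub>y</sub> depends on the lens. As an illustration, the sketch below uses a simple radial polynomial, a common modelling choice; the polynomial form and the coefficients <code>k1</code>, <code>k2</code> are assumptions for demonstration, not something derived above:</p>

```python
import numpy as np

def project_with_distortion(point_3d, f, k1=0.0, k2=0.0):
    """Pinhole projection followed by a simple radial distortion term;
    k1 and k2 are lens-specific coefficients (assumed model)."""
    X, Y, Z = point_3d
    x, y = f * X / Z, f * Y / Z             # ideal pinhole projection
    r2 = x ** 2 + y ** 2                    # squared distance from the optical axis
    scale = 1 + k1 * r2 + k2 * r2 ** 2      # radial distortion factor
    return np.array([x * scale, y * scale])  # i.e. (x + d_x, y + d_y)

ideal = project_with_distortion((2.0, 1.0, 4.0), f=2.0)            # k1 = k2 = 0
bent = project_with_distortion((2.0, 1.0, 4.0), f=2.0, k1=0.05)    # slight distortion
```

<p>With <code>k1 = k2 = 0</code> this reduces to the plain pinhole equations; non-zero coefficients push points radially away from (or towards) the image centre, which is exactly the barrel/pincushion effect described in the introduction.</p>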
<h3 id="heading-intrinsic-and-extrinsic-parameters">Intrinsic and Extrinsic Parameters</h3>
<p>So far, we've discussed the basic models that describe how cameras work and how they capture the 3D world onto a 2D plane. These models give us a high-level view but are generalized and often idealized. In practice, each camera has its unique characteristics that influence how it captures an image. These characteristics are captured by what are known as <strong>intrinsic</strong> and <strong>extrinsic</strong> parameters. While intrinsic parameters deal with the camera's own 'personality' or 'DNA', extrinsic parameters describe how the camera is positioned in space. Together, they offer a complete picture of a camera's behaviour, which is crucial for applications like 3D reconstruction, augmented reality, and robotics.</p>
<h4 id="heading-intrinsic-parameters">Intrinsic Parameters</h4>
<p>Having covered the broad overview of intrinsic and extrinsic parameters, let's zoom in on the intrinsic parameters first. These parameters are unique to each camera and provide insights into how it captures images. While these parameters are generally considered constants for a specific camera, it is important to note that they can sometimes change. For instance, in cameras with variable focal lengths or adjustable sensors, intrinsic parameters can vary.</p>
<ol>
<li><p><strong>Optical Axis:</strong> The optical axis is essentially the line along which light travels into the camera to hit the sensor. In the idealized pinhole and lens models, it's the line that passes through the aperture (or lens centre) and intersects the image plane. It serves as a reference line for other measurements and parameters.</p>
</li>
<li><p><strong>Focal Length</strong> (<em>f</em> ): This is the distance between the lens and the image sensor. Knowing the focal length is crucial for estimating the distances and sizes of objects in images. It's also a key factor in determining the field of view and is usually represented in pixels.</p>
</li>
</ol>
<p>$$f = \alpha \times \text{sensor size},$$</p><p>where <em>α</em> is a constant that relates the physical sensor size to its size in pixels.</p><ol start="3">
<li><strong>Principal Point</strong> (<strong>c<sub>x</sub></strong>, <strong>c<sub>y</sub></strong>)<strong>:</strong> This is the point on the image plane where the optical axis intersects it, often near the centre of the image. It is crucial for tasks like image alignment and panorama stitching.</li>
</ol>
<p>$$\begin{align*} c_x &amp;= \frac{\text{Image Width}}{2},\\ \\ c_y &amp;= \frac{\text{Image Height}}{2}. \end{align*}$$</p><ol start="4">
<li><strong>Skew Coefficient</strong> <strong>(s)</strong>: This parameter accounts for any non-perpendicularity between the x and y pixel axes of the image plane. It is rarely non-zero in modern cameras.</li>
</ol>
<p>$$s = 0 \quad \text{(usually)}$$</p><p>The intrinsic matrix denoted by <strong><em>K</em></strong> consolidates these parameters:</p>
<p>$$K = \begin{pmatrix} f_x &amp; s &amp; c_x \\ 0 &amp; f_y &amp; c_y \\ 0 &amp; 0 &amp; 1 \end{pmatrix}$$</p><h5 id="heading-note-on-constancy"><strong>Note on Constancy</strong></h5>
<p>Although intrinsic parameters like the focal length and principal point are often treated as constants, especially in fixed or pre-calibrated camera setups, they can change with certain hardware configurations. For example, the focal length will vary in cameras with zoom capabilities. In such cases, recalibration may be necessary.</p>
<h5 id="heading-camera-with-zoom-capabilities"><strong>Camera with Zoom Capabilities</strong></h5>
<p>Cameras with zoom capabilities introduce an additional layer of complexity to the calibration process. While zooming allows for better framing or focus on specific areas, it also changes intrinsic parameters like the focal length. This section will explore how to handle calibration in scenarios involving zoom.</p>
<p><strong><em>Calibration at Specific Zoom Levels</em></strong></p>
<p>When you calibrate a camera at a particular zoom level, the resulting intrinsic parameters are only accurate for that setting. If you continue to record or capture images at the same zoom level, these calibration parameters will remain valid.</p>
<p>$$K_{\text{zoom}} = \begin{pmatrix} f_{\text{zoom}} &amp; s &amp; c_x \\ 0 &amp; f_{\text{zoom}} &amp; c_y \\ 0 &amp; 0 &amp; 1 \end{pmatrix}$$</p><p>Here, <strong><em>K<sub>zoom</sub></em></strong> and <strong><em>f<sub>zoom</sub></em></strong> represent the camera matrix and focal length at the specific zoom level, respectively.</p>
<h4 id="heading-handling-zoom-changes">Handling Zoom Changes</h4>
<p>If you adjust the zoom after calibration, you have two main options:</p>
<ul>
<li><p><strong>Dynamic Calibration</strong>: Recalibrate the camera every time you change the zoom. This approach provides the highest accuracy but may be impractical for real-time applications due to computational costs.</p>
</li>
<li><p><strong>Parameter Interpolation</strong>: If you've calibrated the camera at multiple zoom levels, you can interpolate the intrinsic parameters for new zoom settings. This is computationally efficient but might sacrifice some accuracy.</p>
</li>
</ul>
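<p>The interpolation option can be sketched in a few lines; the zoom levels and focal lengths below are hypothetical calibration results, purely for illustration:</p>

```python
import numpy as np

# Hypothetical calibration results: focal length (in pixels) at three zoom levels
zoom_levels = np.array([1.0, 2.0, 4.0])
focal_lengths = np.array([800.0, 1550.0, 3100.0])

def interpolated_focal_length(zoom):
    """Linearly interpolate f for a zoom setting that was not calibrated."""
    return np.interp(zoom, zoom_levels, focal_lengths)

f_est = interpolated_focal_length(3.0)  # between the 2x and 4x calibrations
print(f_est)  # 2325.0
```

<p>In practice the principal point (and any distortion coefficients) can drift with zoom as well, so the same idea would be applied to each interpolated parameter.</p>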
<p>Understanding intrinsic parameters is key for various computer vision tasks. For instance, in augmented reality, an accurate intrinsic matrix can drastically improve the realism and alignment of virtual objects in real-world scenes.</p>
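<p>To make the intrinsic matrix concrete, here is a minimal sketch of how <strong><em>K</em></strong> maps a 3D point (expressed in camera coordinates) to pixel coordinates; all of the numbers below are illustrative:</p>

```python
import numpy as np

# Illustrative intrinsics: fx = fy = 1000 px, a 1280x720 image, zero skew
fx = fy = 1000.0
cx, cy = 1280 / 2, 720 / 2
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# A 3D point already expressed in camera coordinates
point_cam = np.array([0.2, -0.1, 2.0])

# Homogeneous projection: multiply by K, then divide by the depth Z
h = K @ point_cam
pixel = h[:2] / h[2]
print(pixel)  # [740. 310.]
```
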
<h4 id="heading-extrinsic-parameters">Extrinsic Parameters</h4>
<p>While intrinsic parameters define a camera's 'personality' by capturing its internal characteristics, extrinsic parameters tell the 'story' of the camera's interaction with the external world. These parameters, specifically the rotation matrix <strong><em>R</em></strong> and the translation vector <strong><em>T</em></strong>, are indispensable for mapping points from the camera's 2D image plane back to their original 3D coordinates in the world. This becomes particularly vital in scenarios involving multiple cameras or moving cameras, such as in robotics or autonomous vehicles. By accurately determining these extrinsic parameters, one can achieve high-precision tasks like 3D reconstruction and multi-camera scene analysis.</p>
<ol>
<li><p><strong>Rotation Matrix (<em>R</em>):</strong> This <strong><em>3x3</em></strong> matrix gives us the orientation of the camera in the world coordinate system. Specifically, it transforms coordinates from the world frame to the camera frame. For instance, if a drone equipped with a camera needs to align itself to capture a specific scene, the rotation matrix helps in determining the orientation the drone must assume.</p>
<p> The rotation matrix is usually denoted as <strong><em>R</em></strong> and takes the form:</p>
</li>
</ol>
<p>$$R = \begin{pmatrix} r_{11} &amp; r_{12} &amp; r_{13} \\ r_{21} &amp; r_{22} &amp; r_{23} \\ r_{31} &amp; r_{32} &amp; r_{33} \end{pmatrix}$$</p><p>The elements <strong>r<sub>11</sub></strong> through <strong>r<sub>33</sub></strong> define the camera's orientation relative to the world's coordinate system. Since <strong><em>R</em></strong> transforms coordinates from the world frame to the camera frame, each column of <strong><em>R</em></strong> represents one of the world's coordinate axes expressed in the camera's frame. For example, the first column (<strong>r<sub>11</sub></strong>, <strong>r<sub>21</sub></strong>, <strong>r<sub>31</sub></strong>) gives the direction of the world's x-axis as seen in camera coordinates.</p>
<ol start="2">
<li><strong>Translation Vector (T):</strong> This <strong><em>3x1</em></strong> vector represents the position of the camera's optical centre in the world coordinate system. The translation vector is generally represented as:</li>
</ol>
<p>$$T = \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix}$$</p><p>The elements <strong>t<sub>x</sub></strong>, <strong>t<sub>y</sub></strong>, and <strong>t<sub>z</sub></strong> in the translation vector represent the position of the camera's optical centre in the world coordinate system. For instance, <strong>t<sub>x</sub></strong> is the distance from the world origin to the camera's optical centre along the world's x-axis, while <strong>t<sub>y</sub></strong> and <strong>t<sub>z</sub></strong> serve the same purpose along the y and z axes, respectively.</p>
<p>Computing R and T gives you a complete picture of the camera's pose in the world, including both orientation and position.</p>
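<p>As a small sketch of how <strong><em>R</em></strong> and <strong><em>T</em></strong> act together, the snippet below maps a world point into the camera frame using a made-up pose (a 90-degree rotation about the z-axis and a 5-unit translation; both are illustrative, not from a real calibration):</p>

```python
import numpy as np

theta = np.deg2rad(90)  # rotation of 90 degrees about the z-axis (illustrative)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([0.0, 0.0, 5.0])  # translation component of the pose (illustrative)

point_world = np.array([1.0, 0.0, 0.0])
point_cam = R @ point_world + T  # world frame -> camera frame
print(point_cam)  # approximately [0, 1, 5]
```
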
<p>Together, the rotation matrix and the translation vector can be combined into a single <strong><em>3x4</em></strong> matrix, often represented as <strong><em>[R|T]</em></strong>:</p>
<p>$$[R|T] = \begin{pmatrix} r_{11} &amp; r_{12} &amp; r_{13} &amp; t_x \\ r_{21} &amp; r_{22} &amp; r_{23} &amp; t_y \\ r_{31} &amp; r_{32} &amp; r_{33} &amp; t_z \end{pmatrix}$$</p><h3 id="heading-conclusion">Conclusion</h3>
<p>We've covered a lot of ground in this first instalment of our series on camera calibration, unravelling the complexities behind camera models and the intrinsic and extrinsic parameters that define them. These foundational concepts are the building blocks for more advanced topics like distortion correction, 3D reconstruction, and multi-camera setups. In the next part of this series, we'll go beyond the basics to explore the practical reasons for camera calibration, the types of distortions you might encounter, and the mathematical and technical approaches to correct them. So, stay tuned for more insights into the fascinating world of camera calibration!</p>
<h3 id="heading-references">References</h3>
<p>The following works are cited throughout this post:</p>
<p>[1] Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman</p>
<p>[2] <a target="_blank" href="https://web.stanford.edu/class/cs231a/course_notes/01-camera-models.pdf">Stanford CS231A: Camera Models</a></p>
<h3 id="heading-further-reading">Further Reading</h3>
<p>For those looking to delve deeper into the topics covered in this blog post, the following resources are highly recommended:</p>
<ol>
<li><p><strong>Books:</strong></p>
<ul>
<li><p>Digital Image Warping by George Wolberg</p>
</li>
<li><p>Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman</p>
</li>
<li><p>Computer Vision: Algorithms and Applications by Richard Szeliski</p>
</li>
<li><p>3D Computer Vision: Efficient Methods and Applications by Christian Wöhler</p>
</li>
</ul>
</li>
<li><p><strong>Papers:</strong></p>
<ul>
<li><p>A Four-step Camera Calibration Procedure with Implicit Image Correction by Janne Heikkilä and Olli Silvén</p>
</li>
<li><p>Flexible Camera Calibration By Viewing a Plane From Unknown Orientations by Zhengyou Zhang</p>
</li>
</ul>
</li>
</ol>
<p>By exploring these resources, you'll gain a more comprehensive understanding of camera calibration, enabling you to tackle more complex problems and applications.</p>
]]></content:encoded></item><item><title><![CDATA[Iba: When Words Fail, Music Speaks of the Divine]]></title><description><![CDATA[The concept of divine greatness transcends human understanding. While I am in deep search of knowledge even to the height of academic allure I have come to find solace and awe in contemplating the unfathomable. Even in both scientific research and sp...]]></description><link>https://samueladebayo.com/iba-when-words-fail-music-speaks-of-the-divine</link><guid isPermaLink="true">https://samueladebayo.com/iba-when-words-fail-music-speaks-of-the-divine</guid><category><![CDATA[religious]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Thu, 21 Sep 2023 15:48:34 GMT</pubDate><content:encoded><![CDATA[<p>The concept of divine greatness transcends human understanding. While I am in deep search of knowledge even to the height of academic allure I have come to find solace and awe in contemplating the unfathomable. Even in both scientific research and spiritual contemplation, we should be reminded that there are realms of understanding that go beyond what we can readily grasp. It's a reminder of the limitations of human cognition.</p>
<p>The song "IBA" by Pastor Nathaniel Bassey, Dunsin Oyekan, and Dasola Akinbule is deeply inspiring, and yet again I am brought to the realisation of how glorious you are, Lord. You are graciously wonderful; there is not a thought on earth nor in heaven that can fathom how great you are. No thought, whether terrestrial or celestial, can adequately encapsulate your grandeur. In the face of this, I find myself humbled, whispering "IBA, atofarati" in reverent submission.</p>
<p>Musical compositions like "IBA" possess a remarkable ability to crystallize complex emotions and thoughts. They serve both as an articulation of, and a channel for, our ineffable sense of awe and reverence.</p>
<p>I recognise your unsearchable greatness, Ahayah!</p>
]]></content:encoded></item><item><title><![CDATA[Understanding Principal Component Analysis (PCA): A Comprehensive Guide]]></title><description><![CDATA[Introduction
Imagine you're a wine connoisseur with a penchant for data. You've collected a vast dataset that includes variables like acidity, sugar content, and alcohol level for hundreds of wine samples. You're interested in distinguishing wines ba...]]></description><link>https://samueladebayo.com/understanding-principal-component-analysis-pca-a-comprehensive-guide</link><guid isPermaLink="true">https://samueladebayo.com/understanding-principal-component-analysis-pca-a-comprehensive-guide</guid><category><![CDATA[Machine Learning]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Sun, 03 Sep 2023 05:42:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693719713494/5441a385-e68f-4ea1-a21e-c1852a82f4d3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Imagine you're a wine connoisseur with a penchant for data. You've collected a vast dataset that includes variables like acidity, sugar content, and alcohol level for hundreds of wine samples. You're interested in distinguishing wines based on these characteristics, but you soon realize that visualizing and analyzing multi-dimensional data is like trying to taste a wine from a sealed bottle—near impossible.</p>
<p>This is where the magic of Principal Component Analysis, or PCA for short, kicks in. Think of PCA as your data's personal stylist, helping your dataset shed unnecessary dimensions while keeping its essence intact. Whether you're dissecting the nuances of wine characteristics or diving into the depths of machine learning algorithms, PCA is your go-to for simplifying things without losing the crux of the data.</p>
<h2 id="heading-a-deep-dive-into-the-mathematics-of-pca">A Deep Dive into the Mathematics of PCA</h2>
<h3 id="heading-step-1-the-covariance-matrix">Step 1: The Covariance Matrix</h3>
<p>Let's assume you're given a 2D dataset <strong><em>X</em></strong> of size <strong><em>(n×2)</em></strong>, where <em>n</em> is the number of samples. Each row in X represents a data point in 2D space, with the first column holding the x-coordinates and the second column the y-coordinates. The first step in <strong>PCA</strong> is to calculate its covariance matrix <strong><em>∑</em></strong>:</p>
<p>$$\Sigma = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T$$</p><p>Here x<sub>i</sub> represents the i<sup>th</sup> row in <strong><em>X</em></strong> (a 2D point), and <strong><em>μ</em></strong> is the mean vector of the dataset. The term <strong><em>(x<sub>i </sub> - μ)</em></strong> represents the deviation of each point from the mean, and <strong><em>(x<sub>i </sub> - μ)<sup>T</sup></em></strong> is its transpose.</p>
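<p>This definition can be checked numerically against NumPy's built-in covariance routine (passing <code>bias=True</code> selects the same 1/n normalisation used in the formula above):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))  # n = 100 data points in 2D
mu = X.mean(axis=0)            # mean vector of the dataset

# Covariance by the definition: (1/n) * sum of outer products of deviations
sigma = (X - mu).T @ (X - mu) / len(X)

# Same result via NumPy (bias=True uses the 1/n normalisation)
assert np.allclose(sigma, np.cov(X, rowvar=False, bias=True))
```

<p>Note that <code>np.cov</code> defaults to the 1/(n−1) sample normalisation; for large n the difference is negligible, and either convention yields the same eigenvectors.</p>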
<h3 id="heading-step-2-eigen-decomposition">Step 2: Eigen Decomposition</h3>
<p>After calculating <strong><em>∑</em></strong>, we next perform an eigen-decomposition of the covariance matrix, which allows us to find its eigenvalues and eigenvectors. The eigen-decomposition of <strong><em>∑</em></strong> can be represented as:</p>
<p>$$\Sigma = Q \Lambda Q^{-1}$$</p><p>Here <strong><em>Q</em></strong> is a matrix where each column is an eigenvector of <strong><em>∑</em></strong> and <strong><em>Λ</em></strong> is a diagonal matrix containing the eigenvalues <strong><em>λ<sub>1,</sub> λ<sub>2, </sub> ..... λ<sub>d </sub></em></strong> in descending order.</p>
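<p>This identity is easy to verify numerically. The sketch below uses the covariance values that appear later in this post as an example matrix (note that <code>np.linalg.eig</code> does not return the eigenvalues in descending order; sort them yourself if the ordering matters):</p>

```python
import numpy as np

# A symmetric, covariance-like matrix (values from the example later in this post)
Sigma = np.array([[102.61, 211.10],
                  [211.10, 461.01]])

eig_values, Q = np.linalg.eig(Sigma)  # columns of Q are the eigenvectors
Lam = np.diag(eig_values)             # diagonal matrix of eigenvalues

# The eigen-decomposition identity: Sigma = Q . Lam . Q^{-1}
assert np.allclose(Sigma, Q @ Lam @ np.linalg.inv(Q))
```
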
<h3 id="heading-step-3-principal-components-and-dimensionality-reduction">Step 3: Principal Components and Dimensionality Reduction</h3>
<p>Let's say you have now selected the <strong><em>k</em></strong> eigenvectors (principal components) that you would like to use for dimensionality reduction. These <strong><em>k</em></strong> eigenvectors form a <strong><em>2 x k</em></strong> matrix <strong><em>P</em></strong>.</p>
<p>The projected data <strong><em>Y</em></strong> , in the new <em>k-dimensional</em> space can be calculated as:</p>
<p>$$Y = X \cdot P$$</p><p>In this equation, <strong><em>X</em></strong> is the original <strong><em>n x 2</em></strong> dataset, and <strong><em>P</em></strong> is the <strong><em>2 x k</em></strong> matrix of principal components. The resulting <strong><em>Y</em></strong> will be of size <strong><em>n x k</em></strong>, effectively reducing the dimensionality of each data point from 2D to <strong><em>k-D</em></strong>.</p>
<h2 id="heading-implementing-pca-with-python">Implementing PCA with Python</h2>
<p>Here's a Python code snippet to get you started with PCA:</p>
<ol>
<li><p><strong>Data Generation:</strong> First, let's generate some synthetic data with 100 samples in a 2D feature space, with correlated x and y coordinates. This mimics real-world data, where features are often correlated.</p><pre><code class="lang-python"> <span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<pre><code class="lang-python"> <span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
 <span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt

 <span class="hljs-comment"># Generate synthetic 2D data</span>
 np.random.seed(<span class="hljs-number">0</span>)
 x = np.random.normal(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-number">100</span>)  <span class="hljs-comment"># x-coordinates</span>
 y = <span class="hljs-number">2</span> * x + np.random.normal(<span class="hljs-number">0</span>, <span class="hljs-number">5</span>, <span class="hljs-number">100</span>)  <span class="hljs-comment"># y-coordinates</span>
 data = np.column_stack((x, y))
</code></pre>
</li>
<li><p><strong>Data Visualisation:</strong> Let's visualise what our generated data looks like.</p>
</li>
</ol>
<pre><code class="lang-python"><span class="hljs-comment"># Plot the synthetic data</span>
plt.figure(figsize=(<span class="hljs-number">8</span>, <span class="hljs-number">6</span>))
plt.scatter(data[:, <span class="hljs-number">0</span>], data[:, <span class="hljs-number">1</span>], label=<span class="hljs-string">'Original Data'</span>)
plt.xlabel(<span class="hljs-string">'X'</span>)
plt.ylabel(<span class="hljs-string">'Y'</span>)
plt.title(<span class="hljs-string">'Synthetic 2D Data'</span>)
plt.grid(<span class="hljs-literal">True</span>)
plt.legend()
plt.show()
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693714113910/c9331999-7d99-4a0d-ab7d-64c574fc129f.png" alt class="image--center mx-auto" /></p>
<ol start="3">
<li><strong>Data Centering:</strong> Before you can apply PCA, it is essential to center the data around the origin. This ensures that the first principal component describes the direction of maximum variance.</li>
</ol>
<pre><code class="lang-python"><span class="hljs-comment"># Calculate the mean of the data</span>
mean_data = np.mean(data, axis=<span class="hljs-number">0</span>)

<span class="hljs-comment"># Center the data by subtracting the mean</span>
centered_data = data - mean_data
</code></pre>
<ol start="4">
<li><strong>Covariance Matrix Calculation:</strong> The covariance matrix captures the internal structure of the data. It is the basis for identifying the principal components.</li>
</ol>
<pre><code class="lang-python"><span class="hljs-comment"># Calculate the covariance matrix</span>
cov_matrix = np.cov(centered_data, rowvar=<span class="hljs-literal">False</span>)

print(<span class="hljs-string">'Covariance of Data'</span>,cov_matrix)
</code></pre>
<p>$$\text{Covariance Matrix} = \begin{pmatrix} 102.61 &amp; 211.10 \\ 211.10 &amp; 461.01 \end{pmatrix}$$</p><p>Code output:</p>
<pre><code class="lang-python">Covariance of Data [[<span class="hljs-number">102.60874942</span> <span class="hljs-number">211.10203024</span>]
                     [<span class="hljs-number">211.10203024</span> <span class="hljs-number">461.00685553</span>]]
</code></pre>
<ol start="5">
<li><strong>Eigen Decomposition:</strong> Here we calculate the eigenvalues and eigenvectors of the covariance matrix. The eigenvectors point in the directions of maximum variance, and the eigenvalues indicate the magnitude of this variance. The first principal component is the eigenvector associated with the largest eigenvalue of the data's covariance matrix; it identifies the direction along which the dataset varies the most.</li>
</ol>
<pre><code class="lang-python"><span class="hljs-comment"># Calculate the eigenvalues and eigenvectors of the covariance matrix</span>
eig_values, eig_vectors = np.linalg.eig(cov_matrix)

print(<span class="hljs-string">'Eigenvalues:'</span>, eig_values, <span class="hljs-string">'\n'</span>, <span class="hljs-string">'Eigenvectors: '</span>, eig_vectors)
</code></pre>
<p>$$\text{Eigenvalues} = \left[ 4.90, 558.71 \right]$$</p><p>$$\text{Eigenvectors} = \begin{pmatrix} -0.91 &amp; -0.42 \\ 0.42 &amp; -0.91 \end{pmatrix}$$</p><ol start="6">
<li><strong>Projection and Visualization:</strong> The data is then projected onto the principal component. The original data, the principal component, and the projected data are plotted together to further emphasize the dimensionality reduction.</li>
</ol>
<pre><code class="lang-python"><span class="hljs-comment"># Choose the eigenvector corresponding to the largest eigenvalue (Principal Component)</span>
principal_component = eig_vectors[:, np.argmax(eig_values)]

<span class="hljs-comment"># Project data onto the principal component</span>
projected_data = np.dot(centered_data, principal_component)

<span class="hljs-comment"># Re-plot the original data and its projection with the principal component as a red arrow</span>

<span class="hljs-comment"># Plot the original data and its projection</span>
plt.figure(figsize=(<span class="hljs-number">10</span>, <span class="hljs-number">8</span>))
plt.scatter(data[:, <span class="hljs-number">0</span>], data[:, <span class="hljs-number">1</span>], alpha=<span class="hljs-number">0.5</span>, label=<span class="hljs-string">'Original Data'</span>)

<span class="hljs-comment"># Draw the principal component as a red arrow</span>
plt.arrow(mean_data[<span class="hljs-number">0</span>], mean_data[<span class="hljs-number">1</span>], principal_component[<span class="hljs-number">0</span>]*<span class="hljs-number">20</span>, principal_component[<span class="hljs-number">1</span>]*<span class="hljs-number">20</span>,
          head_width=<span class="hljs-number">2</span>, head_length=<span class="hljs-number">2</span>, fc=<span class="hljs-string">'r'</span>, ec=<span class="hljs-string">'r'</span>, label=<span class="hljs-string">'Principal Component'</span>)

<span class="hljs-comment"># Plot the projected data as green points</span>
plt.scatter(mean_data[<span class="hljs-number">0</span>] + projected_data * principal_component[<span class="hljs-number">0</span>],
            mean_data[<span class="hljs-number">1</span>] + projected_data * principal_component[<span class="hljs-number">1</span>],
            alpha=<span class="hljs-number">0.5</span>, color=<span class="hljs-string">'g'</span>, label=<span class="hljs-string">'Projected Data'</span>)

plt.xlabel(<span class="hljs-string">'X'</span>)
plt.ylabel(<span class="hljs-string">'Y'</span>)
plt.title(<span class="hljs-string">'Data and Principal Component'</span>)
plt.grid(<span class="hljs-literal">True</span>)
plt.legend()
plt.show()
</code></pre>
<p>Output:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693715195688/d60a373a-f67c-49da-a0bf-8c094994a4d9.png" alt class="image--center mx-auto" /></p>
<p>There we go—the red arrow representing the principal component is now visible in the plot, along with the original data points and their projections (in green). The arrow points in the direction of the highest variance in the dataset, capturing the essence of the data in fewer dimensions.</p>
<h4 id="heading-why-does-the-pca-point-in-downwards-left">Why Does the Principal Component Point Down and to the Left?</h4>
<p>You might have noticed that the red arrow, our principal component, points towards the bottom left. Is this supposed to happen? Absolutely, and here is why:</p>
<p>The direction of the principal component is calculated mathematically to capture the maximum variance in the synthetic dataset. This direction is defined by the eigenvector corresponding to the largest eigenvalue of the covariance matrix.</p>
<p>Simply put, the principal component serves as a "line of best fit" for the multidimensional data. It doesn't necessarily align with the <code>x</code> and <code>y</code> axes, but it captures the correlation between these dimensions. In this specific synthetic dataset, the principal component points towards the bottom left, indicating that as one variable decreases, the other tends to decrease as well, and vice versa.</p>
<p>This is a crucial insight because it tells us not just about the spread of each variable but also about their relationship with each other. So, yes, the direction of the principal component is both intentional and informative.</p>
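<p>To make this concrete, here is a small numpy sketch (using freshly generated synthetic data, not the exact dataset above): an eigenvector is only defined up to sign, so <code>pc</code> and <code>-pc</code> capture exactly the same variance, and on negatively correlated data the component's two entries have opposite signs, which is what makes the arrow point down-left rather than up-right.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
# y decreases as x increases, so the two dimensions are negatively correlated
data = np.column_stack([x, -0.8 * x + rng.normal(scale=0.3, size=200)])
centered = data - data.mean(axis=0)

cov = np.cov(centered, rowvar=False)
eig_values, eig_vectors = np.linalg.eigh(cov)
pc = eig_vectors[:, np.argmax(eig_values)]

# the eigenvector's sign is arbitrary: pc and -pc explain the same variance
var_pos = np.var(centered @ pc)
var_neg = np.var(centered @ -pc)
```

<p>Flipping the sign of the arrow would change the plot's appearance but not the component's meaning.</p>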
<p>In case you would like to run the full code, use the Replit window below:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://replit.com/@SamuelAdebayo4/Vanilla-PCA#main.py">https://replit.com/@SamuelAdebayo4/Vanilla-PCA#main.py</a></div>
<h2 id="heading-real-world-applications">Real-world Applications</h2>
<h3 id="heading-in-the-glass-wine-quality-estimation">In the Glass: Wine Quality Estimation</h3>
<p>Let's circle back to our wine example. You could use PCA to distinguish wines based on key characteristics. By reducing the dimensions, you can visualize clusters of similar wines and maybe even discover the perfect bottle for your next dinner party!</p>
<h3 id="heading-beyond-the-bottle-other-fields">Beyond the Bottle: Other Fields</h3>
<ol>
<li><p><strong>Data Visualization</strong>: High-dimensional biological data, stock market trends, etc.</p>
</li>
<li><p><strong>Noise Reduction</strong>: Image processing and audio signal processing.</p>
</li>
<li><p><strong>Natural Language Processing</strong>: Feature extraction from text data.</p>
</li>
</ol>
<h2 id="heading-future-directions">Future Directions</h2>
<ol>
<li><p><strong>Kernel PCA</strong>: For when linear PCA isn't enough.</p>
</li>
<li><p><strong>Sparse PCA</strong>: When you need a sparse representation.</p>
</li>
<li><p><strong>Integrating with Deep Learning</strong>: Using PCA for better initialization of neural networks.</p>
</li>
</ol>
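<p>To give a flavour of the first of these, here is a minimal numpy-only sketch of kernel PCA with an RBF kernel (the function name and parameter values are illustrative): instead of the covariance matrix, we eigendecompose a centred kernel matrix of pairwise similarities, which lets PCA pick up non-linear structure such as concentric rings that linear PCA cannot untangle.</p>

```python
import numpy as np

def rbf_kernel_pca(X, gamma=1.0, n_components=2):
    """Project X onto the top components of a centred RBF kernel matrix."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
    K = np.exp(-gamma * d2)
    # centre the kernel matrix in feature space
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    K_centered = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    eig_values, eig_vectors = np.linalg.eigh(K_centered)  # ascending order
    idx = np.argsort(eig_values)[::-1][:n_components]
    # scale eigenvectors by sqrt(eigenvalue) to get projected coordinates
    return eig_vectors[:, idx] * np.sqrt(np.maximum(eig_values[idx], 0.0))

# two concentric rings: a classic case where linear PCA falls short
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ring = np.column_stack([np.cos(theta), np.sin(theta)])
X = np.vstack([0.3 * ring, 1.0 * ring])
Z = rbf_kernel_pca(X, gamma=5.0, n_components=2)
```

<p>scikit-learn's <code>KernelPCA</code> provides a production-ready version of the same idea.</p>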
<h2 id="heading-further-reading">Further Reading</h2>
<p>For those who wish to delve deeper into PCA, here are some textbook references:</p>
<ol>
<li><p>"Pattern Recognition and Machine Learning" by Christopher M. Bishop</p>
</li>
<li><p>"The Elements of Statistical Learning" by Trevor Hastie, Robert Tibshirani, and Jerome Friedman</p>
</li>
<li><p>"Machine Learning: A Probabilistic Perspective" by Kevin P. Murphy</p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In the realm of data science, PCA ages like a well-kept Bordeaux—it only gets richer and more valuable as you delve deeper. This versatile approach is more than just a mathematical trick; it's a lens that brings clarity to your analytical endeavors. So whether you're a wine lover seeking the perfect blend, a data scientist sifting through gigabytes, or a machine learning guru, mastering PCA is like adding a Swiss Army knife to your data analysis toolkit.</p>
]]></content:encoded></item><item><title><![CDATA[August 22nd]]></title><description><![CDATA["What is man, that thou art mindful of him? and the son of man, that thou visitest him? For thou hast made him a little lower than the angels, and hast crowned him with glory and honour. Thou madest him to have dominion over the works of thy hands; t...]]></description><link>https://samueladebayo.com/august-22nd</link><guid isPermaLink="true">https://samueladebayo.com/august-22nd</guid><category><![CDATA[Bible ]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Wed, 23 Aug 2023 14:24:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1692800515061/53c1b234-f349-4933-82f5-e5cd4383f9c7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>"What is man, that thou art mindful of him? and the son of man, that thou visitest him? For thou hast made him a little lower than the angels, and hast crowned him with glory and honour. Thou madest him to have dominion over the works of thy hands; thou hast put all things under his feet: All sheep and oxen, yea, and the beasts of the field; The fowl of the air, and the fish of the sea, and whatsoever passeth through the paths of the seas. O Lord our Lord, how excellent is thy name in all the earth!" (Psalm 8:4-9)</p>
<p>This verse of the scripture reminds me of the inherent grace and beauty found in our relationship with the Creator. The imagery of being crowned amidst majestic mountains and vast seas brings forth feelings of awe, wonder, and profound gratitude. We are a unique creation, positioned just a little lower than the heavenly beings, yet bestowed with honour and glory. I'm thankful for this reminder of our divine connection, where the natural world stands as a testament to the Creator's grand design.</p>
<p>The mountains and seas serve not merely as a scenic backdrop but as a profound metaphor for our existence, filled with purpose and meaning. They inspire a sense of humility, yet simultaneously elevate our understanding of our special place in this magnificent creation. I am deeply grateful for the realization that I am part of this intricate and purposeful design, created with intention and love. It's a thought that fills me with thankfulness and inspires me to live a life that reflects this connection. How majestic indeed is His name in all the earth, and how profound is our connection to it all!</p>
<p>Thank you Ahayah!</p>
]]></content:encoded></item><item><title><![CDATA[The Egalitarian Conundrum: A Meritocratic Journey Amid Equality, Equity, and Sardonic Revelations]]></title><description><![CDATA[Let's embark on an extended journey into the maze of my personal narrative that ties closely with our earlier philosophical debate, as discussed in 'About Equality and Equity'. Today, I endeavour to weave an intricate tapestry that seamlessly merges ...]]></description><link>https://samueladebayo.com/the-egalitarian-conundrum-a-meritocratic-journey-amid-equality-equity-and-sardonic-revelations</link><guid isPermaLink="true">https://samueladebayo.com/the-egalitarian-conundrum-a-meritocratic-journey-amid-equality-equity-and-sardonic-revelations</guid><category><![CDATA[Equality]]></category><category><![CDATA[Nigeria]]></category><category><![CDATA[meritocracy]]></category><category><![CDATA[Equality of Opportunity]]></category><category><![CDATA[Equality of Outcomes]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Mon, 12 Jun 2023 00:29:49 GMT</pubDate><content:encoded><![CDATA[<p>Let's embark on an extended journey into the maze of my personal narrative that ties closely with our earlier philosophical debate, as discussed in '<a target="_blank" href="https://samueladebayo.com/about-equality-and-equity">About Equality and Equity</a>'. Today, I endeavour to weave an intricate tapestry that seamlessly merges personal experience, philosophical thought, and a subtle sprinkle of sarcasm. Together, we'll explore the dynamic interplay of 'Equality of Opportunity,' not 'Equality of Outcomes,' within the context of a merit-based society.</p>
<p>The canvas of my early years was set in the culturally diverse landscape of Nigeria. The socio-economic environment of my childhood was marked by austerity and a sense of frugality. My parents, resolute in their vision for their child, were fervent believers in the transformative power of education. They nurtured within me a dream that outstretched the limited purview of our financial means. Years of relentless sacrifice and unwavering determination led my parents to provide me with an invaluable opportunity - a quality education. This served as an egalitarian launchpad where I, along with my peers from various economic strata, could test our mettle. Our schools, the magnificent fortresses of knowledge, emerged as arenas where economic disparity was beautifully blurred into oblivion. We all found ourselves on the same starting line, gearing up for the race of life.</p>
<p>Now, for the biting twist of irony. While we were all equipped with the same opportunity, the outcome was a completely different story. Consider this: We are all given a violin and a piece of music. While the violin and music are the same, the symphony that each individual produces varies drastically. Some, with practice, could create a melody that moves hearts; others might only manage a cacophony. What a splendid testimony to the sardonic wit of life's realities!</p>
<p>With this crystal-clear reality, I found myself standing at life's crossroads. I had two options – to channel my energy into unyielding hard work and diligence, crafting a narrative of success, or to let my circumstances dictate my future, whiling away my life on the sidelines. The melody of meritocracy resonated with me. It echoed the profound truth that the world values individuals not by their ancestral wealth but by the strength of their efforts.</p>
<p>At this juncture, allow me to invite sarcasm back onto our stage. Picture a world where, regardless of effort, skill, or prowess, everyone reaches the finish line simultaneously. Consider an academic setting where the diligent scholar and the habitual procrastinator are both rewarded with the same grade. They call it equality; I call it a comedic tragedy!</p>
<p>From a little child to a person carving out their destiny, my narrative was an intense adventure. Given an extraordinary opportunity, I could have taken any path. But I chose the road less travelled. I decided to rise above my circumstances and use my opportunity to craft a trajectory of success. This narrative was never about equal outcomes, but about a race where the winner wasn't preordained. The medals weren't bestowed freely; they were meticulously earned, each gleaming symbol a testament to the sweat of hard work and the unfaltering spirit of meritocracy.</p>
<p>As my journey progressed, I found myself traversing a path littered with challenges and obstacles. Each hurdle, however, was an opportunity in disguise, a chance to prove my worth, test my resolve, and learn valuable lessons. Hours transformed into days, and days into years as I relentlessly pursued excellence, often at the expense of social gatherings and leisure. The culmination of these years of toil and perseverance resulted in a journey defined by meritocratic success.</p>
<p>Taking a step back, the larger narrative unveils itself, posing a series of philosophical questions. What is the true essence of equality in our society? Is it merely about presenting equal opportunities, or does it extend to ensuring equal outcomes? In our quest for equality, where do we demarcate the boundary between rewarding merit and fostering mediocrity?</p>
<p>To answer these questions, we revisit the essence of 'About Equality and Equity.' The narrative of my life echoes the sentiment that creating equal opportunities forms the cornerstone of a just society. The outcomes, however, shouldn't be identical trophies, but a reflection of our individual efforts, our steadfast determination, and our merits.</p>
<p>As we conclude this philosophical exploration into the realms of equality, equity, and meritocracy, let's cherish the ironic humour life unfurls before us. Life, in all its sardonic wisdom, offers each of us the opportunity to run our unique race. Amid this grand orchestration of humanity, let's value the distinctiveness of each journey and the varying pathways to success. After all, a world where everyone ends up the same would be dreadfully monotonous, don't you agree?</p>
]]></content:encoded></item><item><title><![CDATA[Day 2 [Blind 75][LeetCode] Maximizing Profit from Buying and Selling Stocks]]></title><description><![CDATA[Introduction
Welcome to Day 2 of the Blind 75 Challenge! Today I will be tackling the problem of finding the maximum profit by buying and selling stock once, a common problem in algorithm interviews and coding competitions. In this blogpost, I will e...]]></description><link>https://samueladebayo.com/blind-75-day2-maximizing-profit-from-buying-and-selling-stocks</link><guid isPermaLink="true">https://samueladebayo.com/blind-75-day2-maximizing-profit-from-buying-and-selling-stocks</guid><category><![CDATA[Python]]></category><category><![CDATA[Python 3]]></category><category><![CDATA[CodingInterview]]></category><category><![CDATA[Technical interview]]></category><category><![CDATA[#Leetcode75]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Thu, 20 Apr 2023 21:39:06 GMT</pubDate><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>Welcome to Day 2 of the Blind 75 Challenge! Today I will be tackling the problem of finding the maximum profit from buying and selling a stock once, a common problem in algorithm interviews and coding competitions. In this blog post, I will explore a simple and efficient algorithm that solves the problem in Python using a single pass through the array of stock prices.</p>
<h1 id="heading-problem">Problem</h1>
<p>You are given an array <code>prices</code> where <code>prices[i]</code> is the price of a given stock on the <code>ith</code> day.
You want to maximize your profit by choosing a <em>single day</em> to buy one stock and choosing a <em>different day in the future</em> to sell that stock.
Return the maximum profit you can achieve from this transaction. If you cannot achieve any profit, return 0.</p>
<h1 id="heading-problem-definition-and-explanation">Problem Definition and Explanation</h1>
<p>In this question, we are given an array of stock prices, where the element at index <code>i</code> represents the stock's price on the <code>ith</code> day. We are asked to find the maximum profit, defined as the largest positive difference between a selling price and an earlier buying price. In other words, we want to maximize profit by buying the stock on one day and selling it on a later day. </p>
<p>For example given the array below</p>
<pre><code class="lang-python">[<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">7</span>, <span class="hljs-number">4</span>, <span class="hljs-number">3</span>]
</code></pre>
<p>The maximum profit would be <code>6</code>, since the minimum price in the array is <code>1</code> and the maximum price is <code>7</code> (which occurs on a later day).
Again, the task is to find the maximum profit obtainable from the array.</p>
<h1 id="heading-intuition-behind-the-solution">Intuition behind the solution</h1>
<p><strong>Naive solution</strong>
One possible approach for finding the maximum profit by buying and selling stock is to first find the minimum and maximum values in the array and then calculate the difference between them. This can be implemented as follows:</p>
<pre><code class="lang-python">minimum_price = min(input_list)
maximum_price = max(input_list)
</code></pre>
<p>Then get the maximum profit by finding the difference between the maximum price and minimum price. </p>
<pre><code class="lang-python">maximum_profit = maximum_price - minimum_price
</code></pre>
<p>While finding the minimum and maximum values in the array and subtracting them might work in some cases, it is not correct in general: the maximum price may occur before the minimum price, and you cannot sell before you buy.</p>
<p>Consider the following example:</p>
<pre><code class="lang-python">[<span class="hljs-number">3</span>, <span class="hljs-number">2</span>, <span class="hljs-number">6</span>, <span class="hljs-number">5</span>, <span class="hljs-number">0</span>, <span class="hljs-number">3</span>]
</code></pre>
<p>If we simply find the minimum value (0) and the maximum value (6), we would get a profit of 6 - 0 = 6, which is incorrect: the 6 occurs before the 0, and you can only buy on a day preceding the selling day. The correct maximum profit here is 6 - 2 = 4, obtained by buying the stock on day 2 (price 2) and selling it on day 3 (price 6).
Therefore, subtracting the minimum from the maximum is not a correct solution for this problem. Instead, we need an algorithm that respects the ordering of days while finding the maximum profit from a single buy-and-sell. </p>
<p><strong>Using One-Pass Algorithm</strong> </p>
<p>To overcome the limitations of the naive approach, a one-pass algorithm can be used. This algorithm processes each element of the data structure only once and keeps track of the minimum price seen so far and the maximum profit that can be made from selling the stock at the current price.</p>
<p>Here are the steps for implementing the one-pass algorithm:</p>
<ol>
<li>First check if the list is empty. If empty return 0 as maximum profit.<pre><code class="lang-python"><span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> prices: 
 <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>
</code></pre>
</li>
<li><p>Initialize the minimum price to the first element in the array and the maximum profit seen so far to zero.</p>
<pre><code class="lang-python">minimum_price = input_list[<span class="hljs-number">0</span>]
maximum_profit_seen = <span class="hljs-number">0</span>
</code></pre>
</li>
<li><p>Traverse through the array.</p>
<pre><code class="lang-python"><span class="hljs-keyword">for</span> price <span class="hljs-keyword">in</span> input_list:
</code></pre>
</li>
<li><p>Check if the current price is lower than the minimum price.</p>
<pre><code class="lang-python"> <span class="hljs-keyword">if</span> price &lt; minimum_price:
</code></pre>
</li>
<li><p>If it is, update the minimum price (since no profit can be made from a lower price).</p>
<pre><code class="lang-python">   minimum_price = price
</code></pre>
</li>
<li><p>Else calculate the profit that can be made by selling the stock at the current price. This is the difference between the current price and the minimum price so far.</p>
<pre><code class="lang-python">        <span class="hljs-keyword">else</span>:
               profit = price - minimum_price
</code></pre>
</li>
<li><p>Finally, compare the current profit with the maximum profit seen so far and update the maximum if the current profit is greater.</p>
<pre><code class="lang-python">         <span class="hljs-keyword">if</span> profit &gt; maximum_profit_seen:
             maximum_profit_seen = profit
</code></pre>
</li>
<li><p>Return the maximum profit obtained.</p>
<pre><code class="lang-python"><span class="hljs-keyword">return</span> maximum_profit_seen
</code></pre>
</li>
</ol>
<p>Using this algorithm, we can find the maximum profit that can be made by buying and selling the stock once, taking into account the constraint that the buying day must precede the selling day.</p>
<h1 id="heading-putting-it-altogether-code">Putting it altogether - Code</h1>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">maximum_profit_buy</span>(<span class="hljs-params">input_list: list</span>):</span>
    <span class="hljs-comment"># Check if the input list is empty</span>
    <span class="hljs-keyword">if</span> len(input_list) == <span class="hljs-number">0</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>

    <span class="hljs-comment"># Initialize the minimum price and maximum profit seen so far</span>
    minimum_price = input_list[<span class="hljs-number">0</span>]
    maximum_profit_seen = <span class="hljs-number">0</span>

    <span class="hljs-comment"># Traverse through the input list</span>
    <span class="hljs-keyword">for</span> price <span class="hljs-keyword">in</span> input_list:
        <span class="hljs-comment"># Update the minimum price seen so far</span>
        <span class="hljs-keyword">if</span> price &lt; minimum_price:
            minimum_price = price
        <span class="hljs-keyword">else</span>:
            <span class="hljs-comment"># Calculate the profit that can be made by selling at the current price</span>
            profit = price - minimum_price
            <span class="hljs-comment"># Update the maximum profit seen so far if the current profit is greater</span>
            <span class="hljs-keyword">if</span> profit &gt; maximum_profit_seen:
                maximum_profit_seen = profit

    <span class="hljs-comment"># Return the maximum profit seen so far</span>
    <span class="hljs-keyword">return</span> maximum_profit_seen
</code></pre>
<h1 id="heading-testing">Testing</h1>
<p>Let's test the <code>maximum_profit_buy</code> function:</p>
<pre><code class="lang-python">print(maximum_profit_buy([<span class="hljs-number">7</span>, <span class="hljs-number">6</span>, <span class="hljs-number">4</span>, <span class="hljs-number">3</span>, <span class="hljs-number">1</span>])) <span class="hljs-comment"># Expected output: 0</span>
print(maximum_profit_buy([<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">7</span>, <span class="hljs-number">4</span>, <span class="hljs-number">3</span>])) <span class="hljs-comment"># Expected output: 6</span>
</code></pre>
<p>The first test case uses the array <code>[7,6,4,3,1]</code>, where the stock price decreases every day. No profit can be made, so the expected output is <code>0</code>. The second test case uses <code>[1, 2, 3, 7, 4, 3]</code>; the maximum profit is made by buying on day 1 (price 1) and selling on day 4 (price 7), giving the expected output of <code>6</code>.</p>
<h1 id="heading-time-and-space-complexity">Time and Space Complexity</h1>
<p>The function has a time complexity of O(n), where n is the length of the input array, since we iterate through the array only once. The space complexity is O(1), as we use only a constant amount of extra space to store the minimum price seen so far and the maximum profit.</p>
<h1 id="heading-use-cases">Use cases</h1>
<p>The problem of finding the maximum profit by buying and selling a stock once is a common problem in coding interviews and competitions. It can also be used in finance and economics to analyze the performance of stocks and investments.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>In this blog post, we explored a simple and efficient algorithm to solve the problem of finding the maximum profit that can be made by buying and selling a stock once. By using the one-pass approach and keeping track of the minimum price seen so far and the maximum profit that can be made by selling the stock at the current price, we can solve this problem in O(n) time complexity, where n is the length of the input array.</p>
]]></content:encoded></item><item><title><![CDATA[Day 1 [Blind 75][LeetCode] Two Sum Problem: Using Hash Tables to Find Pairs of Integers That Add Up to a Target Value]]></title><description><![CDATA[Problem
Given an array of integers num  and an integer target, return indices of the two numbers such that they add up to target.
You may assume that each input would have exactly one solution and you may not use the same element twice.
You can assum...]]></description><link>https://samueladebayo.com/blind75-day1-two-sum-python</link><guid isPermaLink="true">https://samueladebayo.com/blind75-day1-two-sum-python</guid><category><![CDATA[Python]]></category><category><![CDATA[Python 3]]></category><category><![CDATA[Technical interview]]></category><category><![CDATA[leetcode]]></category><category><![CDATA[leetcode-solution]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Wed, 19 Apr 2023 18:25:38 GMT</pubDate><content:encoded><![CDATA[<h1 id="heading-problem">Problem</h1>
<p>Given an array of integers <code>num</code>  and an integer <code>target</code>, return indices of the two numbers such that they add up to <code>target</code>.
You may assume that each input would have exactly one solution and you may not use the same element twice.
You can assume that the given input array is not sorted.</p>
<h1 id="heading-problem-definition-and-explanation">Problem definition and explanation.</h1>
<p>The two-sum problem, as it is widely called, is a classic coding challenge that requires finding two integers in a given list that add up to a target value. The problem appears in many technical contexts, for example in algorithm design, data structures, and optimization, and frequently as an interview question for software engineering positions.</p>
<p>Now, to the main thing: this problem requires us to find two integers in the given list, if they exist, that add up to a given target value. So, for example, given the <code>example_list</code> and <code>target_int</code> below: </p>
<pre><code class="lang-python">example_list = [<span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">6</span>, <span class="hljs-number">9</span>]
target_int = <span class="hljs-number">9</span>
</code></pre>
<p>You would be expected to come up with code that returns the index locations of <code>3</code> and <code>6</code>, since these are the integers that add up to the target integer, such that your return value is:
<code>[1, 2]</code></p>
<h1 id="heading-intuition-behind-the-solution">Intuition behind the solution</h1>
<p><strong>layman's thought</strong></p>
<p>When I first approached the Two-Sum problem, my initial thought was to find a way to map each number in the input list to its corresponding index location. I realized that this could be achieved by creating a table or dictionary that stores each number as a key and its corresponding index as the value. Such that for the list below:</p>
<pre><code class="lang-python">example_list = [<span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">6</span>, <span class="hljs-number">9</span>]
</code></pre>
<p>you would have a table similar to the one below:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Elements</td><td>Index Location</td></tr>
</thead>
<tbody>
<tr>
<td>2</td><td>0</td></tr>
<tr>
<td>3</td><td>1</td></tr>
<tr>
<td>6</td><td>2</td></tr>
<tr>
<td>9</td><td>3</td></tr>
</tbody>
</table>
</div><p>Next, I iterated over the input list and for each number, I calculated the difference between that number and the target integer. I then checked if this difference exists in the input list (excluding the current number being checked). If the difference was found in the list, I used the table or dictionary I created earlier to find the index location of the number that makes up the target sum. This gave me the indices of the two numbers that add up to the target value.</p>
<p>In summary, my solution involved creating a table or dictionary that maps each number to its corresponding index location in the input list, and then iterating over the list to find the difference between each number and the target integer. I then used the table or dictionary to find the location of the number that makes up the target sum.</p>
<p><strong>Pythonic thoughts</strong></p>
<p>The table can be represented in Python as a hash table or dictionary that maps each integer in the input list to its index location. This lets us access the index of any integer in constant time. To do this, I created a dictionary that stores the integers as keys and index locations as values. In Python, the index and element can be obtained together with the built-in <code>enumerate</code> function, which yields both while iterating through a list: </p>
<pre><code class="lang-python">cache = {el: en <span class="hljs-keyword">for</span> en, el <span class="hljs-keyword">in</span> enumerate(input_list)}
</code></pre>
<p>Next, iterate over the input list and for each integer, calculate the difference between that integer and the target(given):</p>
<pre><code class="lang-python">    <span class="hljs-keyword">for</span> en, int_1 <span class="hljs-keyword">in</span> enumerate(input_list):
</code></pre>
<p>Next, check if the difference exists in the input list (excluding the current integer being checked). This is achieved by looking it up in the dictionary. This search operation takes constant time. </p>
<pre><code class="lang-python">        <span class="hljs-keyword">if</span> (target_int - int_1) <span class="hljs-keyword">in</span> cache:
            <span class="hljs-keyword">if</span> cache[target_int - int_1] != en:
</code></pre>
<p>If this search operation is successful and the difference is found in the input list, use the dictionary to look up the index location of the integer that makes up the sum. </p>
<pre><code class="lang-python">                            <span class="hljs-keyword">return</span> [cache[int_1], cache[target_int-int_1]]
</code></pre>
<p>If no match is found, i.e. no two integers add up to the target value, we return an empty list. </p>
<pre><code class="lang-python">    <span class="hljs-keyword">return</span> []
</code></pre>
<h1 id="heading-putting-it-altogether-code">Putting it altogether - code</h1>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">two_sum</span>(<span class="hljs-params">input_list: list, target_int:int</span>):</span>
    <span class="hljs-comment"># Create a hash table or dictionary that maps each integer to its index location</span>
    cache = {el:en <span class="hljs-keyword">for</span> en, el <span class="hljs-keyword">in</span> enumerate(input_list)}

    <span class="hljs-comment"># Iterate over the input list and check for the sum of two integers that equals the target value</span>
    <span class="hljs-keyword">for</span> en, int_1 <span class="hljs-keyword">in</span> enumerate(input_list):
        <span class="hljs-keyword">if</span> (target_int - int_1) <span class="hljs-keyword">in</span> cache:
            <span class="hljs-comment"># Check that the two integers are not the same</span>
            <span class="hljs-keyword">if</span> cache[target_int - int_1] != en:
                <span class="hljs-comment"># Return the indices of the two integers that add up to the target value</span>
                <span class="hljs-keyword">return</span> [en, cache[target_int - int_1]]

    <span class="hljs-comment"># Return an empty list if no two integers add up to the target value</span>
    <span class="hljs-keyword">return</span> []
</code></pre>
<h1 id="heading-testing">Testing</h1>
<p>To test if the code works:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Example usage</span>
list_1 = [<span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">6</span>, <span class="hljs-number">9</span>]
print(two_sum(list_1, <span class="hljs-number">9</span>))  <span class="hljs-comment"># Output: [1, 2]</span>
</code></pre>
<h1 id="heading-time-and-space-complexity">Time and Space Complexity</h1>
<p>The time complexity of this solution is O(n), where n is the length of the input list, since each dictionary lookup takes O(1) time on average. The space complexity is also O(n), because the dictionary stores one entry for each integer in the input list.</p>
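<p>As a side note, a common single-pass variant builds the dictionary while iterating instead of up front. This keeps the O(n) bound and also handles duplicate values (e.g. <code>[3, 3]</code> with target 6) without a separate index check. The sketch below is a refinement for comparison, not the solution presented above:</p>
<pre><code class="lang-python">def two_sum_single_pass(nums: list, target: int) -> list:
    # Map each value seen so far to its index.
    seen = {}
    for i, value in enumerate(nums):
        complement = target - value
        # If the complement appeared earlier, we have our pair.
        if complement in seen:
            return [seen[complement], i]
        seen[value] = i
    # No pair of integers sums to the target.
    return []

print(two_sum_single_pass([2, 3, 6, 9], 9))  # Output: [1, 2]
</code></pre>
<p>Because each value is stored only after it has been checked, an element can never be paired with itself, which is why the explicit index comparison becomes unnecessary.</p>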
<h1 id="heading-use-cases">Use cases</h1>
<p>The two-sum problem is a common problem in computer science and appears in many real-world applications. For example, in financial applications, it can be used to find a pair of transactions or prices that sum to a given target amount; in image processing, it can find a pair of pixel intensities that sum to a given value.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>In this blog post, we discussed the two-sum problem, the intuition behind solving it, and how to solve it using a dictionary/hash table. We saw that this problem has a time complexity of O(n) and a space complexity of O(n). We also discussed some use cases of the two-sum problem in real-world applications.</p>
]]></content:encoded></item><item><title><![CDATA[Reading List for February - July]]></title><description><![CDATA[1. "Beyond Order: 12 More Rules for Life" by Jordan B. Peterson - A re-read for reinforcing my focus and perspectives.
2. "Mao: The Unknown Story" by Jung Chang - A historical exploration of Mao Zedong's life and impact on China.
3. "The Irish Differ...]]></description><link>https://samueladebayo.com/reading-list-for-february-july</link><guid isPermaLink="true">https://samueladebayo.com/reading-list-for-february-july</guid><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Sat, 04 Feb 2023 05:39:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1675489040790/9522c916-04f9-4121-8bce-99f00416e7e4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>1. "<strong>Beyond Order: 12 More Rules for Life</strong>" by Jordan B. Peterson - A re-read for reinforcing my focus and perspectives.</p>
<p>2. "<strong>Mao: The Unknown Story</strong>" by Jung Chang - A historical exploration of Mao Zedong's life and impact on China.</p>
<p>3. "<strong>The Irish Difference: A Tumultuous History of Ireland's Breakup with Britain</strong>" by Fergal Tobin - An in-depth examination of Irish culture, including its historical background and unique characteristics.</p>
<p>4. "<strong>Multiple View Geometry in Computer Vision</strong>" by Richard Hartley and Andrew Zisserman - I have read papers by both authors and am fascinated by their work.</p>
<p>5. "<strong>Bayesian Reasoning and Machine Learning</strong>" by David Barber - The future is plagued with uncertainty, and so is our physical world. Building an interactive machine for our physical world requires understanding uncertainties and mitigating their ripple effects. An exploration of how Bayesian reasoning and machine learning combine to model uncertain systems and mitigate their potential impact.</p>
]]></content:encoded></item><item><title><![CDATA[About Equality and Equity]]></title><description><![CDATA[The thoughts expressed here are mine and do not in any way represent that of my university, employers, hierarchy, or close associates.  Additionally, I am no expert in this field, it is only from observations and personal experiences that I have draw...]]></description><link>https://samueladebayo.com/about-equality-and-equity</link><guid isPermaLink="true">https://samueladebayo.com/about-equality-and-equity</guid><category><![CDATA[equity]]></category><category><![CDATA[Diversity, Equality, and Inclusion ]]></category><category><![CDATA[DEI]]></category><category><![CDATA[Diversity, Equity, and Inclusion]]></category><category><![CDATA[Equality]]></category><dc:creator><![CDATA[Samuel Adebayo]]></dc:creator><pubDate>Sat, 14 Jan 2023 13:09:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673701422158/9d49156e-15c6-4fde-b480-301e1120914d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The thoughts expressed here are mine and do not in any way represent that of my university, employers, hierarchy, or close associates.  Additionally, I am no expert in this field, it is only from observations and personal experiences that I have drawn my opinions. </p>
<p>For the last few months, I have been bothered by the ideology peddled by most employers of labour. This very concern has led me to ask a not-so-popular question: am I being approached for employment or opportunities because of the colour of my skin? Perhaps this question is ‘popular’, howbeit only in the minds of the most concerned few. This rather daunting question even led to a more stomach-turning one: is this equity or equality if my answer turns out to be True? Of course, if False, am I being headhunted because of my intelligence, skills, and the ‘diversity’ of my uniqueness? Or is it rather because of prejudice? If True, does this mean I am privileged and profiting from an undeserved opportunity?</p>
<p>As a researcher, when I am faced with a challenging technical problem, especially one that keeps me at it for days, I am led to examine the base class. In object-oriented programming, a base class is a fundamental template or blueprint on which other classes are built. These newly created classes inherit functionality, methods, and principles from the base class. It should also be noted that new methods, principles, and ‘ideologies’ can be created which override the inherited ones. Please hold this thought, as it will make more sense soon. Back to my original pondering: the questions I have asked myself for weeks have led me to this one question. Which is best, Equality or Equity? Or, succinctly put, which is the more noble, just, and fair goal: Equity or Equality? While both have inherited the ideas of social justice and fairness of rights and opportunities, one, more than the other, is overriding the very fundamental truth of the base class while still claiming to belong to it.</p>
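<p>The inheritance analogy can be sketched in a few lines of Python (the class and method names here are purely illustrative, not part of the essay's argument):</p>
<pre><code class="lang-python">class BaseClass:
    # The fundamental template on which other classes are built.
    def principle(self) -> str:
        return "fairness of rights and opportunities"

class Inheritor(BaseClass):
    # Inherits principle() from the base class unchanged.
    pass

class Overrider(BaseClass):
    # Declares the same method, overriding the inherited one
    # while still belonging to the base class's hierarchy.
    def principle(self) -> str:
        return "a different principle, claimed under the same name"

print(Inheritor().principle())  # the inherited behaviour
print(Overrider().principle())  # the overridden behaviour
</code></pre>
<p>Both subclasses are instances of the base class, yet only one still behaves according to it; this is the distinction the rest of the essay draws on.</p>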
<p>As a society, we constantly debate over whether equality or equity is the more desirable goal.  On the surface, the two concepts may appear to be interchangeable, but upon deeper examination, it becomes clear that they represent fundamentally different ways of thinking about the world.</p>
<p>Equality is the absolute ideal that everyone should be treated equally, regardless of background or characteristics.  This is a noble goal and one that is deeply ingrained in our culture.  The idea that all people should be treated with dignity and respect is a fundamental principle of democracy.  However, the problem with this approach is that it assumes that everyone starts from the same place and that the same opportunities are available to everyone. </p>
<p>This is a fallacy. In truth, no two people start from the same place; people have different starting points and different challenges to overcome. Some individuals may have had a privileged upbringing, while others may have struggled with poverty or discrimination. Treating everyone the same, without taking these differences into account, can perpetuate inequality, defeating the very purpose of fairness, diversity, and inclusion.</p>
<p>Equity, however, is the idea that everyone should have and be provided/presented with an equal opportunity to succeed.  This means that individuals and groups who have been traditionally marginalized may require additional resources or support to achieve the same level of success as those who have not faced such barriers.</p>
<p>To achieve equity, we must be willing to acknowledge and address the ways in which structural inequalities exist in our society. This requires us to take a step back and examine the systems and institutions that shape our lives. We must ask ourselves: Are the playing field and opportunities equal for all individuals? Are certain groups or individuals facing barriers or discrimination that make it harder for them to succeed? It is only by acknowledging these difficult truths and taking steps to address them that we can truly achieve a society that is fair and just for all. Equality may be a nice idea, but it is not enough. We must strive for equity if we are to create a society in which everyone can reach their full potential.</p>
<p>It is however important to note that achieving equity does not mean that everyone will have the same outcome, but rather that everyone will have the same opportunity to succeed.  This means that some individuals may still achieve more success than others, but it will not be due to systemic barriers or discrimination that has constantly plagued our society.  More importantly, equity is not about granting preferential treatment to certain groups or individuals, but rather about levelling the playing field and providing the necessary resources and support to overcome barriers.</p>
<p>Additionally, equity must be seen as an ongoing, continuous process: society is ever-changing and dynamic, so opportunities to address inequalities, challenges, and discrimination will always arise. In practice, achieving equity may involve a variety of actions, such as implementing policies and practices that promote diversity and inclusion, creating more accessible educational and job training programs, and addressing biases in hiring and promotion practices.</p>
<p>Ultimately, the goal of equity is to create a society in which everyone can reach their full potential, regardless of their background or characteristics.  It's not only morally right but also beneficial for society, as a diverse and inclusive society is more productive and innovative.</p>
<p>In conclusion, while equality is a very noble goal, it is not enough to achieve a truly just and fair society.  Equity is the more desirable goal, as it acknowledges and addresses the structural inequalities that exist in our society and ensures that everyone has an equal opportunity to succeed.  This requires us to be willing to look beyond the surface and examine the systems and institutions that shape our lives.  Only by achieving equity can we create a society in which everyone can thrive.</p>
]]></content:encoded></item></channel></rss>