<![CDATA[Samuel's Blog]]>https://samueladebayo.comRSS for NodeThu, 19 Sep 2024 00:57:06 GMT60<![CDATA[Looming Pandemic of Digital Addiction]]>https://samueladebayo.com/looming-pandemic-of-digital-addictionhttps://samueladebayo.com/looming-pandemic-of-digital-addictionFri, 12 Apr 2024 12:44:56 GMT<![CDATA[<p>In the current age of technology, there is a silent pandemic brewing, one not of biological origin, but of digital dependency. The usefulness of smartphones has woven them into our daily lives, rendering them indispensable to the current generation. However, this indispensability comes at a hefty price: a growing epidemic of digital addiction that threatens to engulf society in a way previously unseen.</p><p>The phenomenon of digital addiction is not new, yet its scale and impact are escalating rapidly. Smartphones, with their endless applications, instant connectivity, and the lure of social media, have become a constant companion for many, offering both the illusion of connection and the reality of isolation. This paradox lies at the heart of the issue, where the tool designed to connect us to the world also distances us from it. The psychological ramifications of this addiction are profound. From reduced attention spans and disrupted sleep patterns to heightened anxiety and depression, the effects are pervasive. The constant barrage of notifications and the compulsion to remain continually connected disrupt mental peace and personal relationships, leading to a cycle of dependency that is hard to break.</p><p>The proposal to establish rehabilitation centers for digital addiction might once have seemed far-fetched, yet it is becoming increasingly necessary. These facilities would not merely serve as a retreat from technology but as centers for relearning the art of living.
Through counseling, digital detox programs, and the teaching of mindfulness and social skills, individuals can reclaim their autonomy over technology, rather than being ruled by it.</p><p>Addressing this pandemic requires a collective effort. It calls for awareness, education, and proactive measures from all sectors of society. Parents, educators, policymakers, and technology creators must work in tandem to create a balanced digital environment. Teaching digital literacy and fostering environments that encourage face-to-face interactions are crucial steps in this direction.</p><p>As we stand on the precipice of this digital pandemic, it is imperative to recognize and act upon the challenges it presents. The establishment of rehabilitation centers, while a necessary measure, is but a part of the solution. The ultimate goal should be to foster a society where technology serves to enhance human interactions, not replace them. By embedding ethical considerations in the design of technology and promoting a culture that values personal connections, we can mitigate the effects of digital addiction.</p><p>This impending pandemic of digital addiction is a clarion call to reassess our relationship with technology; a reminder that in our quest to connect digitally, we must not sever our ties with the very essence of human experience: real, tangible interactions. The time to act is now, lest we find ourselves ensnared in the very web we have woven.</p>]]><![CDATA[Diving into Dynamic Realms: My Journey from 2D to 3D Convolutional Neural Networks]]>https://samueladebayo.com/3dcnn-introhttps://samueladebayo.com/3dcnn-introTue, 14 Nov 2023 23:06:14 GMT<![CDATA[<p><em>If you would like to dive right into the code</em> <a target="_blank" href="https://github.com/exponentialR/3DCNN"><em>please see here</em></a></p><p>In the ever-evolving landscape of computer vision, the transition from static imagery to the dynamic world of videos marks a significant leap toward understanding the dynamicity of our world. As someone who has spent years unravelling the <code>mysteries</code> hidden in static images using 2D Convolutional Neural Networks, I find myself at an exciting juncture in my PhD journey - diving into the spatio-temporal context. The shift from analyzing still frames to understanding the intricate sequences of video data is not just a step forward in complexity, but a step towards a realm brimming with untapped potential and unexplored challenges. My exploration into this domain is driven by a simple yet profound realization: our world is not static. It is a dynamic tapestry where each moment is a continuation of the last, a story unfolding in time.</p><p>In my previous work, 2D CNNs served as a powerful tool, adept at capturing spatial hierarchies and patterns within images, exploring the intricate relationship between pixels, and encoding subtle patterns via edges and corners.
However, as I delve into video data, I find myself in need of a more sophisticated ally - one capable of understanding not just the spatial but also the temporal nuances of visual data. This is exactly where 3D Convolutional Neural Networks (3D CNNs) enter the picture.</p><p>My shift to 3D CNNs is more than just an academic interest; it is a journey towards a deeper understanding of how we can enable machines to perceive and interpret the world in its full dynamism, stochasticity, and uncertainty, even within seconds of action - much like we do. Every video clip is a symphony of motions, emotions, and interactions, with layers of subtle meaning, and 3D CNNs promise to be the key to deciphering these complex sequences. As I embark on this journey, I am seeking not just to expand the boundaries of my knowledge, but also to contribute to the broader field of computer vision, pushing towards systems that can understand and interact with the world in richer, more meaningful ways.</p><p>In subsequent blog posts, I invite you to join me in exploring 3D CNNs - from the core concepts that distinguish them from their 2D counterparts to the intricate challenges and learning curves I have encountered while applying them to video data. Whether you are a seasoned expert in the field, a beginner, a grad student, or a curious onlooker, I hope to offer insights and experiences that resonate with this domain.</p><h3 id="heading-background-and-core-concepts"><strong>Background and Core Concepts</strong></h3><h4 id="heading-the-evolution-from-2d-to-3d-cnns">The Evolution from 2D to 3D CNNs</h4><p><strong>Understanding CNNs</strong>: Convolutional Neural Networks (CNNs) have been the cornerstone of image analysis in computer vision for years. Traditional 2D CNNs are adept at processing static images, learning spatial hierarchies and patterns by applying filters that capture various aspects of the image, such as edges, textures, and shapes.
If you would like to find out more about 2D CNNs, please refer to my <a target="_blank" href="https://github.com/exponentialR/SamuelAdebayo/tree/main/ML-Slides">slides and labs here</a>.</p><p><strong>Limitation in Capturing Temporal Information</strong>: While 2D CNNs excel in spatial understanding, they fall short in comprehending temporal dynamics, which is crucial when dealing with video data. Videos are essentially sequences of frames, where each frame is tied to its predecessor and successor, creating a temporal continuity that 2D CNNs cannot capture.</p><h4 id="heading-the-emergence-of-3d-cnns">The Emergence of 3D CNNs</h4><p><strong>Introduction to 3D CNNs</strong>: This is where 3D Convolutional Neural Networks change the game. Unlike their 2D counterparts, 3D CNNs are designed to understand both spatial and temporal features. They achieve this by adding an additional dimension, time, to the convolutional process.</p><p><strong>How 3D CNNs Work</strong>: In a 3D CNN, the convolutional filters extend along three dimensions: height, width, and depth (time). This allows the network to not only learn from the spatial content of each frame but also gain insights into the motion and changes occurring across frames. As a result, 3D CNNs can unravel the complex tapestry of actions and interactions in video sequences.</p><h4 id="heading-applications-of-3d-cnns">Applications of 3D CNNs</h4><p><strong>Beyond Static Frames</strong>: The ability of 3D CNNs to interpret time makes them incredibly powerful for a range of applications. This includes action recognition in videos, where understanding the sequence of movements is key, and medical imaging, where temporal changes in 3D scans can indicate crucial health information. In each of these areas, 3D CNNs offer a more comprehensive understanding by considering the evolution of visual data over time.</p><p><strong>Challenges and Opportunities</strong>: The shift to 3D CNNs, however, is not without its challenges.
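</p><p>To make one of those challenges concrete, here is a small, self-contained sketch in plain Python of how a convolutional layer's parameter count grows when a 2D kernel is extended into a third, temporal dimension. The layer sizes below are illustrative assumptions, not values from any model in this post; the formula mirrors the weight shapes used by layers such as PyTorch's <code>nn.Conv2d</code> and <code>nn.Conv3d</code>:</p>

```python
def conv_params(in_ch, out_ch, kernel_dims):
    """Parameters of a conv layer: out_ch filters, each spanning
    in_ch channels times the kernel volume, plus one bias per filter."""
    volume = 1
    for k in kernel_dims:
        volume *= k
    return out_ch * in_ch * volume + out_ch

# Hypothetical first layer: RGB input, 64 filters.
p2d = conv_params(3, 64, (3, 3))      # 2D: 3x3 kernel
p3d = conv_params(3, 64, (3, 3, 3))   # 3D: 3x3x3 kernel (time x height x width)

print(p2d, p3d)  # 1792 5248 - roughly a threefold jump in parameters
```

And that threefold growth is only per layer; the activations also gain a whole time axis, which is where most of the memory cost comes from.<p>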
The addition of the temporal dimension increases the computational complexity significantly. Additionally, training 3D CNNs requires not only larger datasets but also datasets that accurately represent temporal variations.</p><h3 id="heading-my-research-odyssey-with-3d-cnns"><strong>My Research Odyssey with 3D CNNs</strong></h3><h4 id="heading-transitioning-to-spatio-temporal-analysis">Transitioning to Spatio-Temporal Analysis</h4><p><strong>Initial Exploration</strong>: My journey into 3D CNNs began as an extension of my work with 2D CNNs, where I had focused on spatial feature extraction from static images. The transition to 3D CNNs marked a significant shift towards integrating the temporal dimension. My initial challenge lay in comprehending the intricacies of 3D convolutional layers: understanding how they extend the spatial interpretation of 2D CNNs to include temporal relationships.</p><p>The architectural nuances of 3D CNNs, such as the incorporation of time as a third dimension in convolutional operations, presented both a conceptual and practical learning curve. This was not merely about adapting to a new technique but rethinking the approach to data representation and processing.</p><h4 id="heading-navigating-data-complexity">Navigating Data Complexity</h4><p><strong>Data Preprocessing and Management</strong>: One of the most formidable challenges I faced was the preprocessing of video data. Unlike static images, video data comes with additional complexities like variable frame rates, diverse resolutions, and most crucially, a substantial increase in data volume. Developing an efficient preprocessing pipeline that could handle such diversity and volume was paramount.
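</p><p>As one example of the kind of strategy such a pipeline needs, uniformly sampling a fixed number of frames from clips of arbitrary length can be sketched as follows. This is a simplified illustration in plain Python, not the actual pipeline from my project:</p>

```python
def sample_frame_indices(num_frames, num_samples):
    """Pick num_samples indices spread evenly across a clip of
    num_frames frames, so clips of any length yield a fixed-size input."""
    if num_frames <= 0 or num_samples <= 0:
        raise ValueError("num_frames and num_samples must be positive")
    if num_samples == 1:
        return [num_frames // 2]  # single sample: take the middle frame
    step = (num_frames - 1) / (num_samples - 1)
    return [round(i * step) for i in range(num_samples)]

# A 300-frame clip reduced to 8 frames for a 3D CNN input:
print(sample_frame_indices(300, 8))  # [0, 43, 85, 128, 171, 214, 256, 299]
```

The selected frames would then be read, resized, and stacked along the time axis to form the network's input tensor.<p>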
This involved not only frame extraction and resizing but also temporal sampling strategies to capture relevant motion information without overburdening the computational process.</p><p><strong>Architectural Design and Computational Considerations</strong>: Designing the architecture of a 3D CNN required a delicate balance. The model had to be sophisticated enough to capture intricate temporal patterns without becoming computationally infeasible. This entailed an iterative process of model design, where each layer's parameters were carefully calibrated to maximize learning while minimizing computational costs. The extended training durations and heightened resource demands of 3D CNNs necessitated a more strategic approach, leveraging distributed computing and optimizing algorithms for efficiency.</p><h4 id="heading-gleaning-insights-and-developing-solutions">Gleaning Insights and Developing Solutions</h4><p><strong>Performance Optimization</strong>: In pursuit of optimal performance, I explored a variety of architectural tweaks and parameter adjustments. Strategies such as modifying stride and kernel size in convolutional layers, and incorporating advanced techniques like transfer learning, played a crucial role in surmounting the limitations imposed by the sheer scale of video data.</p><p><strong>Combating Overfitting</strong>: The increased parameter count in 3D CNNs heightened the risk of overfitting. To mitigate this, I implemented a combination of regularization strategies, data augmentation techniques, and dropout layers. These measures were critical in ensuring that the model generalized well, rather than merely memorizing the training data.</p><h4 id="heading-reflecting-on-the-journey">Reflecting on the Journey</h4><p>Working with 3D CNNs reinforced the virtue of patience. The field of 3D convolutional analysis is still burgeoning, with much left to explore and understand.
Navigating this terrain often required an iterative, trial-and-error approach, underscoring the importance of resilience in research.</p><h3 id="heading-charting-future-pathways"><strong>Charting Future Pathways</strong></h3><h4 id="heading-advancing-3d-cnn-research">Advancing 3D CNN Research</h4><p><strong>Harnessing Technological Growth</strong>: As computational capabilities continue to advance and datasets grow both in size and complexity, the potential applications of 3D CNNs are set to broaden significantly. I am particularly intrigued by the prospects in domains like augmented reality, where interpreting both spatial and temporal information is key to creating immersive experiences.</p><p><strong>Ongoing Exploration</strong>: My foray into 3D CNNs is an ongoing chapter in my academic journey. I'm keen to delve deeper into novel architectures and apply these models across a wider spectrum of applications. The ultimate goal is to push the frontiers of computer vision and contribute to the development of systems that can interact with our dynamic world more intelligently and intuitively.</p><h3 id="heading-sneak-peek-into-the-next-blog-post"><strong>Sneak Peek into the Next Blog Post</strong></h3><p><strong>Intuition, Mathematics, Code: A Technical Deep Dive into 3D CNNs</strong></p><p>In my upcoming blog post, we'll take a technical deep dive into the world of 3D Convolutional Neural Networks. I'll unravel the intuition behind these sophisticated models, illuminating how they interpret not just the visual cues in static images but also the temporal dynamics in videos. We'll delve into the mathematics that underpins these networks, demystifying how they learn and process information across both space and time. Expect to see detailed discussions on model architecture, accompanied by snippets of code that bring these concepts to life. 
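</p><p>As a small taste of that deep dive, the output extent of a 3D convolution follows the same familiar formula along each of its three dimensions. A quick sanity check in plain Python (the input size, kernel, stride, and padding below are made-up examples, not settings from a real model):</p>

```python
def conv_out_size(size, kernel, stride=1, padding=0):
    """Output length along one dimension of a convolution:
    floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# Hypothetical input: 16 frames of 112x112 pixels, through a 3x3x3
# kernel with stride 1 and padding 1 ("same"-sized output):
print([conv_out_size(s, 3, stride=1, padding=1) for s in (16, 112, 112)])
# [16, 112, 112]

# With stride 2, the clip is downsampled in both time and space:
print([conv_out_size(s, 3, stride=2, padding=1) for s in (16, 112, 112)])
# [8, 56, 56]
```

<p>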
Whether you're keen on understanding the nuts and bolts of 3D convolution operations or interested in the practical aspects of implementing these models in PyTorch, the next post promises to be a treasure trove of insights.</p><p>From discussing the nuances of kernel size and stride in 3D convolutions to exploring strategies for optimizing network performance, we will cover a spectrum of topics that will cater to both beginners and seasoned practitioners in the field. The goal is to provide you with a comprehensive understanding of 3D CNNs that balances theoretical depth with practical applicability. So, stay tuned for an enriching journey into the technical heart of 3D CNNs!</p><p><a target="_blank" href="https://github.com/exponentialR/3DCNN">To take a sneak peek into an experimental 3D CNN architecture, please check here</a></p>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1700002795016/63ccc19d-6cfc-4328-a73b-43c86ed80672.png<![CDATA[Camera Calibration Demystified: Part 2 - Applications and Lens Distortion]]>https://samueladebayo.com/camera-calibration-demystified-part-2-applications-and-lens-distortionhttps://samueladebayo.com/camera-calibration-demystified-part-2-applications-and-lens-distortionSun, 22 Oct 2023 19:13:32 GMT<![CDATA[<h3 id="heading-introduction">Introduction</h3><p>In <a target="_blank" href="https://samueladebayo.com/camera-calibration-part-1">Part 1 of this series on camera calibration</a>, we laid the groundwork by exploring the fundamental principles that govern how cameras translate the 3D world into a 2D image. We delved into camera models and the intrinsic and extrinsic parameters that play a vital role in this transformation. But that was merely scratching the surface.</p><p>In this second instalment, I'm going to broaden the scope significantly. We'll venture into the critical importance of camera calibration across various real-world applications: from robotics to autonomous vehicles and even the arts. We'll also uncover the lens distortions that could potentially mar your images and then look at the mathematical equations behind them.</p><p>So if you've ever wondered how self-driving cars make sense of their environment, how augmented reality applications manage to superimpose digital elements so naturally, or even questioned the mechanics behind your DSLR's crisp photos, you're in for a treat.</p><h3 id="heading-reasons-for-calibration">Reasons for Calibration</h3><p>The importance of camera calibration extends far beyond academic interest; it plays a critical role in various real-world applications.
In this section, I'll investigate why camera calibration is indispensable in key areas.</p><h4 id="heading-1-robotics-and-automation">1. Robotics and Automation</h4><p>In robotics, precision is not just a nice-to-have - it is the name of the game. Whether they work on bustling factory floors or are designed to help people in their homes, these machines have to 'know' what is around them and where exactly it is located. This is even more true for robots rocking machine perception tech, which allows them to interpret and make sense of their surroundings. Getting the camera calibration right in settings like these is often a big deal.</p><p>Take a factory assembly line, for example. Robots are often kitted out with cameras and machine perception algorithms to identify parts or objects. Mess up the camera calibration, and you're in for a world of trouble. Imagine a robot misjudging the position of a piece it's supposed to pick. That's the kind of error that can start a chain reaction of problems. This is not just about assembly lines or specific tasks, either. Suppose a robot is to pick up an item and place it somewhere specific - a well-calibrated camera ensures that the robot's actions are spot-on with what it is seeing. This applies not only to the task at hand but to the robot's ability to navigate more complex situations. Think about it: a finely calibrated camera can act like a robot's "sixth sense", allowing for on-the-fly adjustments during the job.</p><p>To sum it up, nailing camera calibration in robotics and automation isn't just a good practice; it is a must. Whether for aiding complex tasks or helping a robot safely navigate an unstructured environment, getting the camera settings right can either make or break the whole operation.</p><h4 id="heading-2-autonomous-vehicles">2. Autonomous Vehicles</h4><p>We are on the brink of a game-changer - self-driving cars are about to become a common sight on our roads.
But let us not forget, the tech making this possible is anything but simple. At the core, we have advanced vision systems that let these vehicles 'see' the world around them. However, seeing is not always enough; these systems must also be spot-on when interpreting this visual data for real-time decision-making. This is precisely where camera calibration comes in and becomes a critical piece of the puzzle.</p><p>For a minute, think about the challenges of driving autonomously. Cars must navigate a world filled with other vehicles, pedestrians, and other unpredictable elements. Get the camera calibration wrong, and you are asking for trouble. The results of miscalibration? We are talking about potentially misjudging the distance to the car in front, which could translate to insufficient time to brake or even a full-on collision.</p><p>Here is the kicker: autonomous cars rely on many machine vision tasks - such as detecting obstacles, understanding road signs, or even interpreting road markings. Many of these cars require more than one camera, each serving a specific purpose. Hence, calibrating each camera is not a one-off job; it is about ensuring all these cameras work harmoniously.</p><h4 id="heading-3-augmented-and-virtual-reality">3. Augmented and Virtual Reality</h4><p>Okay, let's talk AR and VR. These are realms where the line between the digital and the real world gets blurry. Whether overlaying virtual furniture in your real living room or immersing yourself in a completely digital world, the experience has to feel real. That's why camera calibration is a big deal in AR and VR tech.</p><p>Think about it. You put on a VR headset and step into a virtual world. You move your head, and the perspective changes perfectly in sync. That's not magic; it's precise calibration. If the camera's off even by a little, you might start to feel motion sickness or have a subpar experience.
That's the last thing you want when battling space pirates or exploring a virtual museum.</p><p>Now, switch gears to AR. Imagine using an app on your smartphone to visualize how a new sofa would look in your living room. The app has to blend digital objects with the real world smoothly. If the camera calibration is off, that sofa might look like it's floating in mid-air or sinking into the floor. Not the best way to make a buying decision, right?</p><p>And let's not forget about more advanced applications. In medical AR, for example, getting the camera calibration wrong could be a matter of life and death. Surgeons often use AR tech for guided procedures. In scenarios like this, the calibration needs to be absolutely spot-on for accurate guidance and successful outcomes.</p><p>So, all in all, whether you're gaming, shopping, or even performing surgery, camera calibration in AR and VR isn't just about enhancing the experience; it's about making it possible in the first place.</p><h4 id="heading-4-film-and-photography">4. Film and Photography</h4><p>Let's get into film and photography, where camera calibration isn't just about the tech; it's also about the art. In settings that demand a heavy dose of scientific rigour, like wildlife documentaries or high-speed sports action, getting your camera settings right is non-negotiable. Picture this: you're shooting a documentary on migratory birds. A well-calibrated camera lets you capture beautiful shots and accurate data on how fast and high these birds fly. That adds a layer of scientific credibility to your storytelling.</p><p>But hey, it's not all about the numbers. Camera calibration also plays a starring role in the artistic side of things. Take landscape photography, for instance. You want those mountain ranges and valleys to look as majestic in the photo as they do in real life.
A calibrated camera ensures that the proportions and spatial relationships within the frame are just right, enhancing your shots' emotional impact and narrative quality.</p><p>And let's not forget the controlled chaos of a studio setting. Calibration is your best friend, whether you're doing product photography, snapping high-fashion looks, or capturing fine art reproductions. In essence, camera calibration in film and photography is more than a behind-the-scenes technicality; it's a linchpin that can elevate your work from good to great. It's not just about getting the colour balance or the focus right; it's about capturing the subject's soul, be it a fast-paced sporting event or a still life. When your camera is finely tuned, your work speaks volumes, conveying scientific facts or evoking deep emotions.</p><h3 id="heading-types-of-distortion">Types of Distortion</h3><p>When it comes to camera calibration, addressing distortions is not just a side quest - it is the main objective. Distortions are discrepancies between the captured image and the real-world scene, affecting the accuracy of the camera's representations. They can be characterised as deviations from the ideal imaging model, in which rays from a single point in three-dimensional space converge at a single point on the imaging sensor. Numerous factors contribute to distortions, including lens shape, refractive index variations, and manufacturing imperfections. Distortions come in various types, each with its own mathematical model and correction method. Understanding them is pivotal for calibrating the camera to achieve high accuracy across applications.</p><h4 id="heading-a-radial-distortions">A. Radial Distortions</h4><p><strong> 1.
Barrel Distortions:</strong> Barrel distortion is a sub-type of radial distortion in which straight lines appear to bow outward from the centre of the image.</p><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697994379079/90d52331-74c4-4ae4-aa76-f363b8e7dfdf.png" alt class="image--center mx-auto" /></p><p>The image magnification decreases with distance from the optical axis, so points near the edge of the field are pulled inward while straight lines appear to bulge outward, resembling the shape of a barrel. This type of distortion is common in wide-angle lenses.</p><p><strong> 2. Pincushion Distortion:</strong> Conversely, in pincushion distortion, image magnification increases with distance from the optical axis. Points near the edge are pushed outward, so straight lines appear to bend inward toward the centre, pinched like the sides of a pincushion.</p><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697994279778/0eda4134-2def-4466-a282-d03371951dcc.png" alt class="image--center mx-auto" /></p><p>Mathematically, a unified model can represent both barrel and pincushion distortions, often employing higher-order polynomials, which is particularly useful when working with more complicated lens systems. The general formula is:</p><p>$$\begin{align*} x' &= x\left(1 + k_1 r^2 + k_2 r^4 + \ldots\right) \\ y' &= y\left(1 + k_1 r^2 + k_2 r^4 + \ldots\right) \end{align*}$$</p><p>Here, (<strong><em>x</em></strong>, <strong><em>y</em></strong>) are the original coordinates, (<strong><em>x'</em></strong>, <strong><em>y'</em></strong>) are the distorted coordinates, <strong><em>k<sub>1</sub>, k<sub>2</sub></em></strong>, ...
are the distortion coefficients, and <strong><em>r</em></strong> is the radial distance from the centre of the image, calculated as:</p><p>$$r = \sqrt{x^2 + y^2}$$</p><p>In this general model:</p><ul><li><p>A positive <strong><em>k<sub>1</sub></em></strong> displaces points outward from the centre, producing pincushion distortion.</p></li><li><p>A negative <strong><em>k<sub>1</sub></em></strong> displaces points inward towards the centre, producing barrel distortion.</p></li><li><p>Higher-order terms like <strong><em>k<sub>2</sub></em></strong> allow for more complex distortion patterns, which might be observed in higher-end or more flawed lens systems.</p></li></ul><p>The model is extendable to as many terms as necessary, but in practice, most systems are sufficiently modelled using just <strong><em>k<sub>1</sub></em></strong> and sometimes <strong><em>k<sub>2</sub></em></strong>.</p><h4 id="heading-b-tangential-distortions">B. Tangential Distortions</h4><p>These distortions occur when the lens and the imaging plane are not parallel. While radial distortions displace image points along the radial direction from the centre, tangential distortions act orthogonally to that direction. They can make the image appear tilted or skewed, effectively moving the distorted image points horizontally and vertically in a way unrelated to their distance from the optical axis.</p><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697995709722/f516bc89-ba0d-4105-b706-e7b8ae0d9bda.png" alt class="image--center mx-auto" /></p><p>Mathematically, tangential distortion can be expressed as:</p><p>$$\begin{align*} x' &= x + \left(2p_1 xy + p_2 (r^2 + 2x^2)\right) \\ y' &= y + \left(p_1 (r^2 + 2y^2) + 2p_2 xy\right) \end{align*}$$</p><p>Here, <strong><em>x'</em></strong> and <strong><em>y'</em></strong> are the distorted coordinates.
<strong><em>x</em></strong> and <strong><em>y</em></strong> are the original coordinates, and <strong><em>r</em></strong> is the radial distance from the origin, calculated as <strong><em>r</em></strong> = <strong><em>√(x<sup>2</sup> + y<sup>2</sup>)</em></strong>.</p><p>The coefficients <strong><em>p<sub>1</sub></em></strong> and <strong><em>p<sub>2</sub></em></strong> are the tangential distortion coefficients. These terms correct for the tilt of the lens and bring the captured image closer to what would be captured if the lens were perfectly aligned. By adjusting the coefficients during the camera calibration process, one can minimize the effects of tangential distortions.</p><h3 id="heading-conclusion">Conclusion</h3><p>In this second instalment, we've delved deeper into the reasons for camera calibration across various industries, touched on different types of distortions, and hinted at the mathematics involved. However, we've only scratched the surface. In Part 3, we'll dive into the heart of the mathematics that makes accurate camera calibration possible. From optimization problems to factoring in distortions, we'll explore how all these elements combine to create a robust camera model.
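</p><p>As a parting illustration, the radial and tangential terms above are commonly combined into a single distortion function (the convention most calibration toolkits follow). Below is a small, self-contained Python sketch of that combined model; the coefficient values are arbitrary and chosen purely for illustration:</p>

```python
def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply the radial (k1, k2) and tangential (p1, p2) distortion
    model to a point (x, y) in normalised image coordinates."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# Sample a horizontal line y = 0.5. A negative k1 pulls every point
# toward the centre (barrel); a positive k1 pushes it outward (pincushion).
line = [(-1 + 0.5 * i, 0.5) for i in range(5)]
barrel = [distort(x, y, k1=-0.2) for x, y in line]
pincushion = [distort(x, y, k1=0.2) for x, y in line]
```

<p>Because the displacement grows with the radial distance, the midpoint of the sampled line moves least and its endpoints most, which is exactly why straight lines appear curved. 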
Stay tuned!</p><h3 id="heading-references">References</h3><p>For those looking to delve deeper into the topics covered in this blog post, the following resources are highly recommended:</p><p>[1] <a target="_blank" href="https://github.com/exponentialR/SamuelAdebayo/tree/main/CameraCalibration">Codes for distortion plots</a></p><p>[2] Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman</p><p>[3] <a target="_blank" href="https://web.stanford.edu/class/cs231a/course_notes/01-camera-models.pdf"><strong>Stanford CS231A: Camera Models</strong></a></p>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1698002967308/c5c48c3b-2ac7-4b57-9b25-8f22d775a56b.png<![CDATA[Retrospective: Teaching Intro to Python Programming]]>https://samueladebayo.com/retrospective-teaching-intro-to-python-programminghttps://samueladebayo.com/retrospective-teaching-intro-to-python-programmingMon, 16 Oct 2023 05:03:40 GMT<![CDATA[<p>Hello, everyone! I had the privilege of teaching a "Python Programming" course at Belfast Metropolitan College during Autumn 2022 and Winter 2023. As we move into Autumn, I've decided to share the lecture slides and occasionally the recorded classes from this course every week.</p><p><a target="_blank" href="https://github.com/exponentialR/SamuelAdebayo/blob/main/Week1%20-%20Introduction.pdf">Week 1 Lecture Slides here</a></p><h4 id="heading-why-share-now">Why Share Now?</h4><p>Sharing educational resources has always been a way to democratise knowledge.
Whether you're a student who took the course and wants to revisit the material or someone who's just getting started with Python, these resources will serve as a comprehensive guide.</p><h4 id="heading-what-did-week-1-cover">What Did Week 1 Cover?</h4><p>To give you a taste of what's to come, week one was all about laying the foundation:</p><h5 id="heading-module-introduction-we-discussed-what-the-course-aimed-to-achieve-and-why-python-is-an-invaluable-language-to-learn">Module Introduction: We discussed what the course aimed to achieve and why Python is an invaluable language to learn.</h5><h5 id="heading-introduction-to-computer-programming-the-course-kicked-off-by-laying-down-the-basics-of-computer-programming-and-its-relevance-in-todays-digital-landscape">Introduction to Computer Programming: The course kicked off by laying down the basics of computer programming and its relevance in today's digital landscape.</h5><h5 id="heading-programming-basics-students-were-introduced-to-the-fundamental-building-blocks-of-all-programming-languages">Programming Basics: Students were introduced to the fundamental building blocks of all programming languages.</h5><h5 id="heading-natural-language-vs-programming-language-a-comparative-look-at-how-our-everyday-language-differs-from-programming-languages-and-why-that-matters">Natural Language vs Programming Language: A comparative look at how our everyday language differs from programming languages and why that matters.</h5><p>Translators, Compilers, and Assemblers: An overview of the tools that make coding in Python possible and how they work.</p><h4 id="heading-what-to-expect">What to Expect?</h4><p>Each week, I'll post the slides corresponding to that week's topics. 
Occasionally, I will also share the recorded lectures for those who prefer a more interactive learning experience.</p><p>Whether you're a beginner in Python or looking to refresh your knowledge, stay tuned for weekly updates that will take you from the basics to more advanced topics. Don't forget to check back each week for new materials, and happy learning!</p>]]><![CDATA[Camera Calibration Demystified: Part 1 - Fundamentals and Models]]>https://samueladebayo.com/camera-calibration-part-1https://samueladebayo.com/camera-calibration-part-1Sun, 01 Oct 2023 13:19:52 GMT<![CDATA[<h3 id="heading-introduction">Introduction</h3><p>Imagine you're taking a photo of a building with your smartphone. You might notice that the lines of the building don't appear as straight as they do in real life, or the proportions seem slightly off. These are distortions: discrepancies between the real-world objects and their representations in the image. Such distortions often occur due to the inherent limitations of camera lenses and sensors as they attempt to map a 3D world onto a 2D plane.</p><p>Camera calibration is the technique used to understand and correct these distortions.
It's a fundamental process for achieving more accurate visual representations, especially in applications like augmented reality, robotics, and 3D reconstruction. In this first part of our series on camera calibration, we'll explore the foundational concepts and models that serve as the backbone of this technique. We'll delve into the intrinsic and extrinsic parameters that influence how a camera captures an image and discuss how these parameters can be determined to correct distortions. By the end of this post, you'll have a solid understanding of the principles behind camera calibration and its importance in various domains.</p><h3 id="heading-camera-models">Camera Models</h3><p>To understand the intricacies of camera imaging, it's useful to connect the dots with real-world applications. Take the example of a self-driving car, which relies on its camera to accurately gauge the dimensions and distances of surrounding elements like pedestrians, other vehicles, and road signs. Just as understanding the human eye's perception aids in comprehending our interaction with the 3D world, grasping the mechanics of a camera model enhances the precision of such measurements in automated systems. To unpack this further, let's engage in a thought experiment: envision a simple setup (See figure 1) where a small barrier with a pinhole is placed between a 3D object and a film. Light rays from the object pass through the pinhole to create an image on the film. 
This basic mechanism serves as the cornerstone for what is known as the pinhole camera model, a foundational concept that allows us to fine-tune the way cameras, like the one in a self-driving car, interpret the world.</p><h4 id="heading-pinhole-camera-model">Pinhole Camera Model</h4><h5 id="heading-the-setup"><strong>The setup</strong></h5><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696162002489/2dc69ea2-1d6f-4ebf-9a41-4705332915d1.png" alt class="image--center mx-auto" /></p><center>Figure 1: The Pinhole Camera model [2]</center><p>In the pinhole model, consider a 3D coordinate system defined by unit vectors <strong><em>i</em></strong>, <strong><em>j</em></strong>, <strong><em>k</em></strong>. Place an object point P with coordinates (<strong><em>X, Y, Z</em></strong>) in this world. The camera's aperture is at the origin <strong>O</strong>, and the image plane (or film) is parallel to the i-j plane at a distance f along the k-axis. The film has a centre <strong><em>C'</em></strong>, and the projection of <strong><em>P</em></strong> onto the film is <strong><em>P'</em></strong> with 2D coordinates (<strong><em>x, y</em></strong>) (see Figures 1 and 2).</p><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696162197761/f7c4f796-f04f-447d-9d58-d8a717da1edd.png" alt class="image--center mx-auto" /></p><center>Figure 2: A formal construction of the Pinhole Camera model [2]</center><h5 id="heading-the-mathematics"><strong>The Mathematics</strong></h5><p>To relate <strong><em>P</em></strong> and <strong><em>P'</em></strong>, we draw a line from <strong><em>P</em></strong> through the aperture <strong><em>O</em></strong>, intersecting the film at <strong><em>P'</em></strong>. 
The triangle formed by <strong><em>P</em></strong>, <strong><em>O</em></strong>, and the foot of the perpendicular from <strong><em>P</em></strong> to the optical axis is similar to the triangle formed by <strong><em>P'</em></strong>, <strong><em>O</em></strong>, and <strong><em>C'</em></strong>, which gives us:</p><p>$$\frac{x}{f} = \frac{X}{Z} \quad \text{and} \quad \frac{y}{f} = \frac{Y}{Z}$$</p><p>Solving for <strong><em>x</em></strong> and <strong><em>y</em></strong>, we get:</p><p>$$\begin{align*} x &= f \left( \frac{X}{Z} \right), \\ y &= f \left( \frac{Y}{Z} \right). \end{align*}$$</p><p>Here, f represents the focal length of the camera.</p><h4 id="heading-lens-models">Lens Models</h4><p>While the pinhole model gives us an idealized perspective of image formation, real-world cameras use lenses to focus light. Lenses introduce additional complexities due to their shape, material, and how they bend light rays. Lens models account for additional factors like focal length, aperture, and lens distortions. Let's explore them to understand these intricacies.</p><h5 id="heading-the-setup-1"><strong>The Setup</strong></h5><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696162656186/1a69c633-2f3e-47f6-a2d8-0acc5ac0c4c0.png" alt class="image--center mx-auto" /></p><center>Figure 3: The Simple lens model [2]</center><p>Like the pinhole model, lens models use a 3D coordinate system defined by <strong><em>i</em></strong>, <strong><em>j</em></strong>, <strong><em>k</em></strong>. However, instead of a pinhole at <strong><em>O</em></strong>, we have a lens. 
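Before going further into the lens geometry, the pinhole projection equations above can be captured in a few lines of Python. This is a minimal sketch; the function name and the sample point and focal length are illustrative, not taken from the post:

```python
import numpy as np

def project_pinhole(point_3d, f):
    """Project a 3D point (X, Y, Z) onto the image plane of an ideal
    pinhole camera with focal length f (aperture at the origin,
    optical axis along Z)."""
    X, Y, Z = point_3d
    if Z <= 0:
        raise ValueError("Point must lie in front of the camera (Z > 0)")
    # Similar-triangles relations: x = f * X / Z, y = f * Y / Z
    return np.array([f * X / Z, f * Y / Z])

# A point 2 m in front of the camera, 0.5 m to the right and 0.25 m up,
# imaged with a focal length of 0.05 (same units as the point).
print(project_pinhole((0.5, 0.25, 2.0), f=0.05))
```

Note how doubling Z halves both image coordinates, which is exactly the perspective foreshortening the equations describe.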
The image plane is still at a distance f along the <strong><em>k-axis</em></strong>; we denote the centre of this plane as <strong><em>C'</em></strong> (see Figures 3 and 4).</p><h5 id="heading-the-mathematics-1"><strong>The Mathematics</strong></h5><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696162458374/87f2a71a-1d96-493b-9c16-a7cb1659c7ae.png" alt class="image--center mx-auto" /></p><center>Figure 4: The Simple lens model: Relationship between points in the focal plane and the real world (3D)[2]</center><p>In lens models, we need to account for distortions introduced by the lens. These distortions are typically represented by <strong>d<sub>x</sub></strong> and <strong>d<sub>y</sub></strong>, affecting the x and y coordinates, respectively. The equations for x and y in lens models are:</p><p>$$\begin{align*} x &= f \left( \frac{X}{Z} \right) + d_x, \\ y &= f \left( \frac{Y}{Z} \right) + d_y. \end{align*}$$</p><p>In these equations, <strong>d<sub>x</sub></strong> and <strong>d<sub>y</sub></strong> are functions of <strong><em>X</em></strong>, <strong><em>Y</em></strong>, and <strong><em>Z</em></strong>, and they represent the distortions introduced by the lens.</p><h3 id="heading-intrinsic-and-extrinsic-parameters">Intrinsic and Extrinsic Parameters</h3><p>So far, we've discussed the basic models that describe how cameras work and how they capture the 3D world onto a 2D plane. These models give us a high-level view but are generalized and often idealized. In practice, each camera has its unique characteristics that influence how it captures an image. These characteristics are captured by what are known as <strong>intrinsic</strong> and <strong>extrinsic</strong> parameters. While intrinsic parameters deal with the camera's own 'personality' or 'DNA', extrinsic parameters describe how the camera is positioned in space. 
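The lens-model equations above leave the distortion terms d<sub>x</sub> and d<sub>y</sub> unspecified. A simple radial polynomial is one common assumption; the sketch below uses a hypothetical single-coefficient radial model, where the coefficient k1 and all sample values are illustrative rather than from the post:

```python
import numpy as np

def project_with_distortion(point_3d, f, k1):
    """Project a 3D point with the lens model x = f*X/Z + d_x, y = f*Y/Z + d_y,
    using an assumed simple radial form d = k1 * r^2 * (undistorted coordinate)
    for the distortion functions."""
    X, Y, Z = point_3d
    xu, yu = f * X / Z, f * Y / Z        # ideal pinhole projection
    r2 = xu ** 2 + yu ** 2               # squared distance from the optical axis
    dx, dy = k1 * r2 * xu, k1 * r2 * yu  # radial displacement grows with r^2
    return np.array([xu + dx, yu + dy])

# With k1 = 0 the lens model reduces to the pinhole model.
print(project_with_distortion((0.5, 0.25, 2.0), f=0.05, k1=0.0))
```

Real calibration pipelines typically estimate several radial and tangential coefficients, but the structure of the correction is the same: a displacement added to the ideal pinhole projection.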
Together, they offer a complete picture of a camera's behaviour, which is crucial for applications like 3D reconstruction, augmented reality, and robotics.</p><h4 id="heading-intrinsic-parameters">Intrinsic Parameters</h4><p>After understanding the broad overview of intrinsic and extrinsic parameters, let's zoom in on the intrinsic parameters first. These parameters are unique to each camera and provide insights into how it captures images. While these parameters are generally considered constants for a specific camera, it is important to note that they can sometimes change. For instance, in cameras with variable focal lengths or adjustable sensors, intrinsic parameters can vary.</p><ol><li><h4 id="heading-optical-axis">Optical Axis</h4><p> The optical axis is essentially the line along which light travels into the camera to hit the sensor. In the idealized pinhole and lens models, it's the line that passes through the aperture (or lens centre) and intersects the image plane. It serves as a reference line for other measurements and parameters.</p></li><li><p><strong>Focal Length</strong> (<em>f</em> ): This is the distance between the lens and the image sensor. Knowing the focal length is crucial for estimating the distances and sizes of objects in images. It's also a key factor in determining the field of view and is usually represented in pixels.</p></li></ol><p>$$f = \alpha \times \text{sensor size} ,$$</p><p>$$\text{Here}, \alpha \space \text{is a constant that relates the physical sensor size to the size in pixels}$$</p><ol start="3"><li><strong>Principal Point</strong> (<strong>c<sub>x</sub></strong>, <strong>c<sub>y</sub></strong>)<strong>:</strong> This is the point on the image plane where the optical axis intersects; it often lies near the centre of the image. It is crucial for tasks like image alignment and panorama stitching.</li></ol><p>$$\begin{align*} c_x &= \frac{\text{Image Width}}{2},\\ \\ c_y &= \frac{\text{Image Height}}{2}. 
\end{align*}$$</p><ol start="4"><li><strong>Skew Coefficient</strong> <strong>(s)</strong>: This parameter accounts for any angle (non-perpendicularity) between the x and y pixel axes of the image plane. It is rarely encountered in modern-day cameras.</li></ol><p>$$s = 0 \quad \text{(usually)}$$</p><p>The intrinsic matrix, denoted by <strong><em>K</em></strong>, consolidates these parameters:</p><p>$$K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$$</p><h5 id="heading-note-on-constancy"><strong>Note on Constancy</strong></h5><p>Although intrinsic parameters like the focal length and principal point are often treated as constants, especially in fixed or pre-calibrated camera setups, they can change based on specific hardware configurations. For example, the focal length will vary in cameras with zoom capabilities. Therefore, in such special cases, recalibration may be necessary.</p><h5 id="heading-camera-with-zoom-capabilities"><strong>Camera with Zoom Capabilities</strong></h5><p>Cameras with zoom capabilities introduce an additional layer of complexity to the calibration process. While zooming allows for better framing or focus on specific areas, it also changes intrinsic parameters like the focal length. This section will explore how to handle calibration in scenarios involving zoom.</p><p><strong><em>Calibration at Specific Zoom Levels</em></strong></p><p>When you calibrate a camera at a particular zoom level, the resulting intrinsic parameters are only accurate for that setting. 
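As a quick illustration of the intrinsic matrix <em>K</em> introduced above, the following sketch assembles K and uses it, via homogeneous coordinates, to map a point expressed in the camera frame to pixel coordinates. All numeric values are illustrative, not from the post:

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy, s=0.0):
    """Assemble the intrinsic matrix K from focal lengths (in pixels),
    the principal point, and the skew coefficient."""
    return np.array([[fx,  s,  cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project_with_K(K, point_3d):
    """Map a 3D point in camera coordinates to pixel coordinates:
    multiply by K, then divide by the third (depth) component."""
    p = K @ np.asarray(point_3d, dtype=float)
    return p[:2] / p[2]  # perspective division gives (u, v) in pixels

K = intrinsic_matrix(fx=800.0, fy=800.0, cx=320.0, cy=240.0)
print(project_with_K(K, (0.5, 0.25, 2.0)))
```

The perspective division by the depth component is what turns the linear map K into the nonlinear pinhole projection derived earlier.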
If you continue to record or capture images at the same zoom level, these calibration parameters will remain valid.</p><p>$$K_{\text{zoom}} = \begin{pmatrix} f_{\text{zoom}} & s & c_x \\ 0 & f_{\text{zoom}} & c_y \\ 0 & 0 & 1 \end{pmatrix}$$</p><p>Here, <strong><em>K<sub>zoom</sub></em></strong> and <strong><em>f<sub>zoom</sub></em></strong> represent the camera matrix and focal length at the specific zoom level, respectively.</p><h4 id="heading-handling-zoom-changes">Handling Zoom Changes</h4><p>If you adjust the zoom after calibration, you have two main options:</p><ul><li><p><strong>Dynamic Calibration</strong>: Recalibrate the camera every time you change the zoom. This approach provides the highest accuracy but may be impractical for real-time applications due to computational costs.</p></li><li><p><strong>Parameter Interpolation</strong>: If you've calibrated the camera at multiple zoom levels, you can interpolate the intrinsic parameters for new zoom settings. This is computationally efficient but might sacrifice some accuracy.</p></li></ul><p>Understanding intrinsic parameters is key for various computer vision tasks. For instance, in augmented reality, an accurate intrinsic matrix can drastically improve the realism and alignment of virtual objects in real-world scenes.</p><h4 id="heading-extrinsic-parameters">Extrinsic Parameters</h4><p>While intrinsic parameters define a camera's 'personality' by capturing its internal characteristics, extrinsic parameters tell the 'story' of the camera's interaction with the external world. These parameters, specifically the rotation matrix <strong><em>R</em></strong> and the translation vector <strong><em>T</em></strong>, are indispensable for mapping points from the camera's 2D image plane back to their original 3D coordinates in the world. This becomes particularly vital in scenarios involving multiple cameras or moving cameras, such as in robotics or autonomous vehicles. 
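The parameter-interpolation option described above can be sketched as follows. The zoom levels and focal lengths here are hypothetical calibration results, and linear interpolation is just one reasonable choice; in practice the focal length may not vary linearly with the zoom setting:

```python
import numpy as np

# Hypothetical calibration results at a few zoom settings:
# zoom level -> focal length in pixels (values are made up for illustration)
zoom_levels = np.array([1.0, 2.0, 4.0])
focal_lengths = np.array([800.0, 1600.0, 3200.0])

def interpolated_focal_length(zoom):
    """Linearly interpolate the focal length for a zoom setting that
    falls between calibrated levels."""
    return float(np.interp(zoom, zoom_levels, focal_lengths))

print(interpolated_focal_length(3.0))
```

The interpolated value would then replace f<sub>zoom</sub> in the K<sub>zoom</sub> matrix shown earlier, while the principal point is often kept fixed or interpolated the same way.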
By accurately determining these extrinsic parameters, one can achieve high-precision tasks like 3D reconstruction and multi-camera scene analysis.</p><ol><li><p><strong>Rotation Matrix (<em>R</em>):</strong> This <strong><em>3x3</em></strong> matrix gives us the orientation of the camera in the world coordinate system. Specifically, it transforms coordinates from the world frame to the camera frame. For instance, if a drone equipped with a camera needs to align itself to capture a specific scene, the rotation matrix helps in determining the orientation the drone must assume.</p><p> The rotation matrix is usually denoted as <strong><em>R</em></strong> and takes the form:</p></li></ol><p>$$R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}$$</p><p>The elements <strong>r<sub>11</sub></strong> through <strong>r<sub>33</sub></strong> define the camera's orientation relative to the world's coordinate system. Since <strong><em>R</em></strong> transforms world coordinates into camera coordinates, each column of <strong><em>R</em></strong> represents one of the world's coordinate axes expressed in the camera's frame. For example, the first column (<strong>r<sub>11</sub></strong>, <strong>r<sub>21</sub></strong>, <strong>r<sub>31</sub></strong>) describes how the world's x-axis aligns with the camera's local x, y, and z axes.</p><ol start="2"><li><strong>Translation Vector (T):</strong> This <strong><em>3x1</em></strong> vector represents the position of the camera's optical centre in the world coordinate system. The translation vector is generally represented as:</li></ol><p>$$T = \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix}$$</p><p>The elements <strong>t<sub>x</sub></strong>, <strong>t<sub>y</sub></strong>, and <strong>t<sub>z</sub></strong> in the translation vector represent the position of the camera's optical centre in the world coordinate system. 
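Once R and T are known, they combine with the intrinsic matrix into the full projection pipeline: a pixel is obtained (up to the perspective division) as K [R|T] applied to the homogeneous world point. Here is a minimal sketch; the values chosen for K, R, and T are illustrative, with an identity rotation and a camera whose frame is shifted one unit along Z:

```python
import numpy as np

def extrinsic_matrix(R, T):
    """Stack the 3x3 rotation matrix and 3x1 translation vector
    into the 3x4 [R|T] matrix."""
    return np.hstack([R, np.asarray(T, dtype=float).reshape(3, 1)])

def project_world_point(K, R, T, point_world):
    """Full projection: pixel coordinates from K [R|T] X_homogeneous,
    followed by the perspective division."""
    X_h = np.append(np.asarray(point_world, dtype=float), 1.0)  # homogeneous coords
    p = K @ extrinsic_matrix(R, T) @ X_h
    return p[:2] / p[2]

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)          # identity rotation: camera axes aligned with world axes
T = [0.0, 0.0, 1.0]    # world points appear one unit deeper in the camera frame
print(project_world_point(K, R, T, (0.5, 0.25, 1.0)))
```

With identity extrinsics (R = I, T = 0) this reduces to the camera-frame projection, which is a handy sanity check when debugging a calibration pipeline.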
For instance, <strong>t<sub>x</sub></strong> is the distance from the world origin to the camera's optical centre along the world's x-axis, while <strong>t<sub>y</sub></strong> and <strong>t<sub>z</sub></strong> serve the same purpose along the y and z axes, respectively.</p><p>Computing R and T gives you a complete picture of the camera's pose in the world, including both orientation and position.</p><p>Together, the rotation matrix and the translation vector can be combined into a single <strong><em>3x4</em></strong> matrix, often represented as <strong><em>[R|T]</em></strong>:</p><p>$$[R|T] = \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{pmatrix}$$</p><h3 id="heading-conclusion">Conclusion</h3><p>We've covered a lot of ground in this first instalment of our series on camera calibration, unravelling the complexities behind camera models and the intrinsic and extrinsic parameters that define them. These foundational concepts are the building blocks for more advanced topics like distortion correction, 3D reconstruction, and multi-camera setups. In the next part of this series, we'll go beyond the basics to explore the practical reasons for camera calibration, the types of distortions you might encounter, and the mathematical and technical approaches to correct them. 
So, stay tuned for more insights into the fascinating world of camera calibration!</p><h3 id="heading-references">References</h3><p>The following sources are cited in this post:</p><p>[1] Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman</p><p>[2] <a target="_blank" href="https://web.stanford.edu/class/cs231a/course_notes/01-camera-models.pdf">Stanford CS231A: Camera Models</a></p><h3 id="heading-further-reading">Further Reading</h3><p>For those looking to delve deeper into the topics covered in this blog post, the following resources are highly recommended:</p><ol><li><p><strong>Books:</strong></p><ul><li><p>Digital Image Warping by George Wolberg</p></li><li><p>Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman</p></li><li><p>Computer Vision: Algorithms and Applications by Richard Szeliski</p></li><li><p>3D Computer Vision: Efficient Methods and Applications by Christian Wöhler</p></li></ul></li><li><p><strong>Papers:</strong></p><ul><li><p>A Four-step Camera Calibration Procedure with Implicit Image Correction by Janne Heikkilä and Olli Silvén</p></li><li><p>Flexible Camera Calibration By Viewing a Plane From Unknown Orientations by Zhengyou Zhang</p></li></ul></li></ol><p>By exploring these resources, you'll gain a more comprehensive understanding of camera calibration, enabling you to tackle more complex problems and applications.</p>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1696165838827/5dab2b8a-fa43-4d91-b987-10d2c0159abf.png<![CDATA[Iba: When Words Fail, Music Speaks of the Divine]]>https://samueladebayo.com/iba-when-words-fail-music-speaks-of-the-divinehttps://samueladebayo.com/iba-when-words-fail-music-speaks-of-the-divineThu, 21 Sep 2023 15:48:34 GMT<![CDATA[<p>The concept of divine greatness transcends human understanding. 
While I am in deep search of knowledge, even to the height of academic allure, I have come to find solace and awe in contemplating the unfathomable. In both scientific research and spiritual contemplation, we should be reminded that there are realms of understanding that go beyond what we can readily grasp. It's a reminder of the limitations of human cognition.</p><p>The song "IBA", by Pastor Nathaniel Bassey, Dunsin Oyekan, and Dasola Akinbule, is deeply inspiring, and yet again I am brought to the realisation of how glorious you are, Lord; you are graciously wonderful; there is not a thought on earth nor in heaven that can fathom how great you are. No thought, whether terrestrial or celestial, can adequately encapsulate your grandeur. In the face of this, I find myself humbled, whispering "IBA, atofarati" in reverent submission.</p><p>Musical compositions like "IBA" possess a remarkable ability to crystallize complex emotions and thoughts. They serve both as an articulation of, and a channel for, our ineffable sense of awe and reverence.</p><p>I recognise your unsearchable greatness, Ahayah!</p>]]><![CDATA[Understanding Principal Component Analysis (PCA): A Comprehensive Guide]]>https://samueladebayo.com/understanding-principal-component-analysis-pca-a-comprehensive-guidehttps://samueladebayo.com/understanding-principal-component-analysis-pca-a-comprehensive-guideSun, 03 Sep 2023 05:42:01 GMT<![CDATA[<h2 id="heading-introduction">Introduction</h2><p>Imagine you're a wine connoisseur with a penchant for data. You've collected a vast dataset that includes variables like acidity, sugar content, and alcohol level for hundreds of wine samples. You're interested in distinguishing wines based on these characteristics, but you soon realize that visualizing and analyzing multi-dimensional data is like trying to taste a wine from a sealed bottle: near impossible.</p><p>This is where the magic of Principal Component Analysis, or PCA for short, kicks in. Think of PCA as your data's personal stylist, helping your dataset shed unnecessary dimensions while keeping its essence intact. Whether you're dissecting the nuances of wine characteristics or diving into the depths of machine learning algorithms, PCA is your go-to for simplifying things without losing the crux of the data.</p><h2 id="heading-a-deep-dive-into-the-mathematics-of-pca">A Deep Dive into the Mathematics of PCA</h2><h3 id="heading-step-1-the-covariance-matrix">Step 1: The Covariance Matrix</h3><p>Let's assume you're given a 2D dataset <strong><em>X</em></strong> of size <strong><em>(n × 2)</em></strong>, where <em>n</em> is the number of samples.
Each row in X represents a data point in 2D space, with the first column holding the x-coordinates and the second column holding the y-coordinates. The first step in <strong>PCA</strong> is to calculate its covariance matrix <strong><em>Σ</em></strong>:</p><p>$$\Sigma = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T$$</p><p>Here x<sub>i</sub> represents the i<sup>th</sup> row in <strong><em>X</em></strong> (a 2D point), and <strong><em>μ</em></strong> is the mean vector of the dataset. The term <strong><em>(x<sub>i</sub> - μ)</em></strong> represents the deviation of each point from the mean, and <strong><em>(x<sub>i</sub> - μ)<sup>T</sup></em></strong> is its transpose.</p><h3 id="heading-step-2-eigen-decomposition">Step 2: Eigen Decomposition</h3><p>After calculating <strong><em>Σ</em></strong>, we next perform eigen-decomposition of the covariance matrix to find its eigenvalues and eigenvectors. The eigen decomposition of <strong><em>Σ</em></strong> can be represented as:</p><p>$$\Sigma = Q \Lambda Q^{-1}$$</p><p>Here <strong><em>Q</em></strong> is a matrix whose columns are the eigenvectors of <strong><em>Σ</em></strong>, and <strong><em>Λ</em></strong> is a diagonal matrix containing the eigenvalues <strong><em>λ<sub>1</sub>, λ<sub>2</sub>, ..., λ<sub>d</sub></em></strong> in descending order.</p><h3 id="heading-step-3-principal-components-and-dimensionality-reduction">Step 3: Principal Components and Dimensionality Reduction</h3><p>Let's say you have now found <strong><em>k</em></strong> eigenvectors (principal components) that you would like to use for dimensionality reduction.
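As a quick numerical sanity check on the two formulas above (a minimal sketch; the four toy points are invented for illustration), you can compute the covariance matrix directly from the centred data and confirm that Q Λ Q⁻¹ reconstructs it:

```python
import numpy as np

# Toy 2D dataset: four points, one per row.
X = np.array([[2.0, 0.0],
              [0.0, 2.0],
              [3.0, 1.0],
              [1.0, 3.0]])
n = X.shape[0]
mu = X.mean(axis=0)    # mean vector
centered = X - mu      # deviations (x_i - mu)

# Population covariance, matching the 1/n formula above
# (note that np.cov defaults to the 1/(n-1) sample estimate instead).
sigma = centered.T @ centered / n

# Eigen-decomposition: for a symmetric sigma, eigh returns orthonormal Q.
eig_values, Q = np.linalg.eigh(sigma)
reconstructed = Q @ np.diag(eig_values) @ np.linalg.inv(Q)
```

For this toy data, `sigma` works out to [[1.25, -0.75], [-0.75, 1.25]] with eigenvalues 0.5 and 2.0, and `reconstructed` matches `sigma` to floating-point precision.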
These <strong><em>k</em></strong> eigenvectors form a <strong><em>2 x k</em></strong> matrix <strong><em>P</em></strong>.</p><p>The projected data <strong><em>Y</em></strong>, in the new <em>k-dimensional</em> space, can be calculated as:</p><p>$$Y = X \cdot P$$</p><p>In this equation, <strong><em>X</em></strong> is the original <strong><em>n x 2</em></strong> dataset, and <strong><em>P</em></strong> is the <strong><em>2 x k</em></strong> matrix of principal components. The resulting <strong><em>Y</em></strong> will be of size <strong><em>n x k</em></strong>, effectively reducing the dimensionality of each data point from 2D to <strong><em>k-D</em></strong>.</p><h2 id="heading-implementing-pca-with-python">Implementing PCA with Python</h2><p>Here's a Python code snippet to get you started with PCA:</p><ol><li><p><strong>Data Generation:</strong> First, let's generate some synthetic data with 100 samples in a 2D feature space of x and y coordinates. This mimics real-world data, where features are often correlated.</p><pre><code class="lang-python"> <span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np <span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt <span class="hljs-comment"># Generate synthetic 2D data</span> np.random.seed(<span class="hljs-number">0</span>) x = np.random.normal(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-number">100</span>) <span class="hljs-comment"># x-coordinates</span> y = <span class="hljs-number">2</span> * x + np.random.normal(<span class="hljs-number">0</span>, <span class="hljs-number">5</span>, <span class="hljs-number">100</span>) <span class="hljs-comment"># y-coordinates</span> data = np.column_stack((x, y))</code></pre></li><li><p><strong>Data Visualisation:</strong> Let's visualise what our generated data looks like.</p></li></ol><pre><code class="lang-python"><span
class="hljs-comment"># Plot the synthetic data</span>plt.figure(figsize=(<span class="hljs-number">8</span>, <span class="hljs-number">6</span>))plt.scatter(data[:, <span class="hljs-number">0</span>], data[:, <span class="hljs-number">1</span>], label=<span class="hljs-string">'Original Data'</span>)plt.xlabel(<span class="hljs-string">'X'</span>)plt.ylabel(<span class="hljs-string">'Y'</span>)plt.title(<span class="hljs-string">'Synthetic 2D Data'</span>)plt.grid(<span class="hljs-literal">True</span>)plt.legend()plt.show()</code></pre><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693714113910/c9331999-7d99-4a0d-ab7d-64c574fc129f.png" alt class="image--center mx-auto" /></p><ol><li><strong>Data Centering:</strong> Before you can apply PCA, it is essential to center the data around the origin. This ensures that the first principal component describes the direction for maximum variance.</li></ol><pre><code class="lang-python"><span class="hljs-comment"># Calculate the mean of the data</span>mean_data = np.mean(data, axis=<span class="hljs-number">0</span>)<span class="hljs-comment"># Center the data by subtracting the mean</span>centered_data = data - mean_data</code></pre><ol><li><strong>The covariance Matrix calculation:</strong> The covariance matrix captures the internal structure of the data. 
It is the basis for identifying the principal components.</li></ol><pre><code class="lang-python"><span class="hljs-comment"># Calculate the covariance matrix</span>cov_matrix = np.cov(centered_data, rowvar=<span class="hljs-literal">False</span>)print(<span class="hljs-string">'Covariance of Data'</span>,cov_matrix)</code></pre><p>$$\text{Covariance Matrix} = \begin{pmatrix} 102.61 & 211.10 \\ 211.10 & 461.01 \end{pmatrix}$$</p><p>Code output:</p><pre><code class="lang-python">Covariance of Data [[<span class="hljs-number">102.60874942</span> <span class="hljs-number">211.10203024</span>] [<span class="hljs-number">211.10203024</span> <span class="hljs-number">461.00685553</span>]]</code></pre><ol><li><strong>Eigen Decomposition:</strong> Here we calculate the eigenvalues and eigenvectors of the covariance matrix. The eigenvectors point in the direction of maximum variance, and the eigenvalues indicate the magnitude of this variance - since the first principal component is the eigenvector associated with the largest eigenvalue of the data's covariance matrix. This eigenvector identifies the direction along which the dataset varies the most.</li></ol><pre><code class="lang-python"><span class="hljs-comment"># Calculate the eigenvalues and eigenvectors of the covariance matrix</span>eig_values, eig_vectors = np.linalg.eig(cov_matrix)print(<span class="hljs-string">'Eigenvalues:'</span>, eig_values, <span class="hljs-string">'\n'</span>, <span class="hljs-string">'Eigenvectors: '</span>, eig_vectors)</code></pre><p>$$\text{Eigenvalues} = \left[ 4.90, 558.71 \right]$$</p><p>$$\text{Eigenvectors} = \begin{pmatrix} -0.91 & -0.42 \\ 0.42 & -0.91 \end{pmatrix}$$</p><ol><li><strong>Projection and Visualization:</strong> Our data is then projected onto the principal component. 
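One useful by-product of the eigenvalues (a brief aside, not in the original post): their relative sizes tell you how much of the total variance each principal component explains. Plugging in the two eigenvalues just computed:

```python
import numpy as np

# Eigenvalues reported above for the synthetic dataset.
eig_values = np.array([4.90, 558.71])

# Fraction of the total variance captured by the first principal component.
explained_ratio = eig_values.max() / eig_values.sum()
# explained_ratio is roughly 0.99, so a single component keeps
# almost all of the information in this strongly correlated data.
```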
The original data with the principal component, and the projected data are then plotted together to further emphasize the dimensionality reduction.</li></ol><pre><code class="lang-python"><span class="hljs-comment"># Choose the eigenvector corresponding to the largest eigenvalue (Principal Component)</span>principal_component = eig_vectors[:, np.argmax(eig_values)]<span class="hljs-comment"># Project data onto the principal component</span>projected_data = np.dot(centered_data, principal_component)<span class="hljs-comment"># Re-plot the original data and its projection with the principal component as a red arrow</span><span class="hljs-comment"># Plot the original data and its projection</span>plt.figure(figsize=(<span class="hljs-number">10</span>, <span class="hljs-number">8</span>))plt.scatter(data[:, <span class="hljs-number">0</span>], data[:, <span class="hljs-number">1</span>], alpha=<span class="hljs-number">0.5</span>, label=<span class="hljs-string">'Original Data'</span>)<span class="hljs-comment"># Draw the principal component as a red arrow</span>plt.arrow(mean_data[<span class="hljs-number">0</span>], mean_data[<span class="hljs-number">1</span>], principal_component[<span class="hljs-number">0</span>]*<span class="hljs-number">20</span>, principal_component[<span class="hljs-number">1</span>]*<span class="hljs-number">20</span>, head_width=<span class="hljs-number">2</span>, head_length=<span class="hljs-number">2</span>, fc=<span class="hljs-string">'r'</span>, ec=<span class="hljs-string">'r'</span>, label=<span class="hljs-string">'Principal Component'</span>)<span class="hljs-comment"># Plot the projected data as green points</span>plt.scatter(mean_data[<span class="hljs-number">0</span>] + projected_data * principal_component[<span class="hljs-number">0</span>], mean_data[<span class="hljs-number">1</span>] + projected_data * principal_component[<span class="hljs-number">1</span>], alpha=<span class="hljs-number">0.5</span>, color=<span 
class="hljs-string">'g'</span>, label=<span class="hljs-string">'Projected Data'</span>)plt.xlabel(<span class="hljs-string">'X'</span>)plt.ylabel(<span class="hljs-string">'Y'</span>)plt.title(<span class="hljs-string">'Data and Principal Component'</span>)plt.grid(<span class="hljs-literal">True</span>)plt.legend()plt.show()</code></pre><p>Output:</p><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693715195688/d60a373a-f67c-49da-a0bf-8c094994a4d9.png" alt class="image--center mx-auto" /></p><p>There we gothe red arrow representing the principal component is now visible in the plot, along with the original data points and their projections (in green). The arrow points in the direction of the highest variance in the dataset, capturing the essence of the data in fewer dimensions.</p><h4 id="heading-why-does-the-pca-point-in-downwards-left">Why does the PCA point in Downwards left?</h4><p>You might have noticed that the red arrow, our principal component, points towards the bottom left. IS this supposed to happen? Absolutely, and here is why:</p><p>The direction of the principal component is calculated mathematically to capture the maximum variance in the synthetic dataset. This direction is defined by the eigenvector corresponding to the largest eigenvalue of the covariance.</p><p>Simply put, the principal components serve as a "line of best fit" for the multidimensional data It doesn't necessarily mean an alignment with the <code>x</code> and <code>y</code> axis but it captures the correlation between these dimensions. In this specific synthetic dataset, the principal component points towards the bottom left, indicating that as one variable decreases, the other tends to decrease as well, and vice-versa.</p><p>This is a crucial insight because it tells us not just about the spread of each variable but also about their relationship with each other. 
So, yes, the direction of the principal component is both intentional and informative.</p><p>In case you would like to run the full code, use the Replit window below:</p><div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://replit.com/@SamuelAdebayo4/Vanilla-PCA#main.py">https://replit.com/@SamuelAdebayo4/Vanilla-PCA#main.py</a></div><p> </p><h2 id="heading-real-world-applications">Real-world Applications</h2><h3 id="heading-in-the-glass-wine-quality-estimation">In the Glass: Wine Quality Estimation</h3><p>Let's circle back to our wine example. You could use PCA to distinguish wines based on key characteristics. By reducing the dimensions, you can visualize clusters of similar wines and maybe even discover the perfect bottle for your next dinner party!</p><h3 id="heading-beyond-the-bottle-other-fields">Beyond the Bottle: Other Fields</h3><ol><li><p><strong>Data Visualization</strong>: High-dimensional biological data, stock market trends, etc.</p></li><li><p><strong>Noise Reduction</strong>: Image processing and audio signal processing.</p></li><li><p><strong>Natural Language Processing</strong>: Feature extraction from text data.</p></li></ol><h2 id="heading-future-directions">Future Directions</h2><ol><li><p><strong>Kernel PCA</strong>: For when linear PCA isn't enough.</p></li><li><p><strong>Sparse PCA</strong>: When you need a sparse representation.</p></li><li><p><strong>Integrating with Deep Learning</strong>: Using PCA for better initialization of neural networks.</p></li></ol><h2 id="heading-further-reading">Further Reading</h2><p>For those who wish to delve deeper into PCA, here are some textbook references:</p><ol><li><p>"Pattern Recognition and Machine Learning" by Christopher M.
Bishop</p></li><li><p>"The Elements of Statistical Learning" by Trevor Hastie, Robert Tibshirani, and Jerome Friedman</p></li><li><p>"Machine Learning: A Probabilistic Perspective" by Kevin P. Murphy</p></li></ol><h2 id="heading-conclusion">Conclusion</h2><p>In the realm of data science, PCA ages like a well-kept Bordeaux: it only gets richer and more valuable as you delve deeper. This versatile approach is more than just a mathematical trick; it's a lens that brings clarity to your analytical endeavors. So whether you're a wine lover seeking the perfect blend, a data scientist sifting through gigabytes, or a machine learning guru, mastering PCA is like adding a Swiss Army knife to your data analysis toolkit.</p>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1693719713494/5441a385-e68f-4ea1-a21e-c1852a82f4d3.jpeg<![CDATA[August 22nd]]>https://samueladebayo.com/august-22ndhttps://samueladebayo.com/august-22ndWed, 23 Aug 2023 14:24:14 GMT<![CDATA[<p>"What is man, that thou art mindful of him? and the son of man, that thou visitest him? For thou hast made him a little lower than the angels, and hast crowned him with glory and honour. Thou madest him to have dominion over the works of thy hands; thou hast put all things under his feet: All sheep and oxen, yea, and the beasts of the field; The fowl of the air, and the fish of the sea, and whatsoever passeth through the paths of the seas. O Lord our Lord, how excellent is thy name in all the earth!" (Psalm 8:4-9)</p><p>This verse of the scripture reminds me of the inherent grace and beauty found in our relationship with the Creator. The imagery of being crowned amidst majestic mountains and vast seas brings forth feelings of awe, wonder, and profound gratitude. We are a unique creation, positioned just a little lower than the heavenly beings, yet bestowed with honour and glory.
I'm thankful for this reminder of our divine connection, where the natural world stands as a testament to the Creator's grand design.</p><p>The mountains and seas serve not merely as a scenic backdrop but as a profound metaphor for our existence, filled with purpose and meaning. They inspire a sense of humility, yet simultaneously elevate our understanding of our special place in this magnificent creation. I am deeply grateful for the realization that I am part of this intricate and purposeful design, created with intention and love. It's a thought that fills me with thankfulness and inspires me to live a life that reflects this connection. How majestic indeed is His name in all the earth, and how profound is our connection to it all!</p><p>Thank you Ahayah!</p>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1692800515061/53c1b234-f349-4933-82f5-e5cd4383f9c7.jpeg<![CDATA[The Egalitarian Conundrum: A Meritocratic Journey Amid Equality, Equity, and Sardonic Revelations]]>https://samueladebayo.com/the-egalitarian-conundrum-a-meritocratic-journey-amid-equality-equity-and-sardonic-revelationshttps://samueladebayo.com/the-egalitarian-conundrum-a-meritocratic-journey-amid-equality-equity-and-sardonic-revelationsMon, 12 Jun 2023 00:29:49 GMT<![CDATA[<p>Let's embark on an extended journey into the maze of my personal narrative that ties closely with our earlier philosophical debate, as discussed in '<a target="_blank" href="https://samueladebayo.com/about-equality-and-equity">About Equality and Equity</a>'. Today, I endeavour to weave an intricate tapestry that seamlessly merges personal experience, philosophical thought, and a subtle sprinkle of sarcasm. Together, we'll explore the dynamic interplay of 'Equality of Opportunity,' not 'Equality of Outcomes,' within the context of a merit-based society.</p><p>The canvas of my early years was set in the culturally diverse landscape of Nigeria. The socio-economic environment of my childhood was marked by austerity and a sense of frugality. My parents, resolute in their vision for their child, were fervent believers in the transformative power of education. They nurtured within me a dream that outstretched the limited purview of our financial means. 
Years of relentless sacrifice and unwavering determination led my parents to provide me with an invaluable opportunity - a quality education. This served as an egalitarian launchpad where I, along with my peers from various economic strata, could test our mettle. Our schools, the magnificent fortresses of knowledge, emerged as arenas where economic disparity was beautifully blurred into oblivion. We all found ourselves on the same starting line, gearing up for the race of life.</p><p>Now, for the biting twist of irony. While we were all equipped with the same opportunity, the outcome was a completely different story. Consider this: We are all given a violin and a piece of music. While the violin and music are the same, the symphony that each individual produces varies drastically. Some, with practice, could create a melody that moves hearts; others might only manage a cacophony. What a splendid testimony to the sardonic wit of life's realities!</p><p>With this crystal-clear reality, I found myself standing at life's crossroads. I had two options: to channel my energy into unyielding hard work and diligence, crafting a narrative of success, or to let my circumstances dictate my future, whiling away my life on the sidelines. The melody of meritocracy resonated with me. It echoed the profound truth that the world values individuals not by their ancestral wealth but by the strength of their efforts.</p><p>At this juncture, allow me to invite sarcasm back onto our stage. Picture a world where, regardless of effort, skill, or prowess, everyone reaches the finish line simultaneously. Consider an academic setting where the diligent scholar and the habitual procrastinator are both rewarded with the same grade. They call it equality; I call it a comedic tragedy!</p><p>From a little child to a person carving out their destiny, my narrative was an intense adventure. Given an extraordinary opportunity, I could have taken any path. But I chose the road less travelled. 
I decided to rise above my circumstances and use my opportunity to craft a trajectory of success. This narrative was never about equal outcomes, but about a race where the winner wasn't preordained. The medals weren't bestowed freely; they were meticulously earned, each gleaming symbol a testament to the sweat of hard work and the unfaltering spirit of meritocracy.</p><p>As my journey progressed, I found myself traversing a path littered with challenges and obstacles. Each hurdle, however, was an opportunity in disguise, a chance to prove my worth, test my resolve, and learn valuable lessons. Hours transformed into days, and days into years as I relentlessly pursued excellence, often at the expense of social gatherings and leisure. The culmination of these years of toil and perseverance resulted in a journey defined by meritocratic success.</p><p>Taking a step back, the larger narrative unveils itself, posing a series of philosophical questions. What is the true essence of equality in our society? Is it merely about presenting equal opportunities, or does it extend to ensuring equal outcomes? In our quest for equality, where do we demarcate the boundary between rewarding merit and fostering mediocrity?</p><p>To answer these questions, we revisit the essence of 'About Equality and Equity.' The narrative of my life echoes the sentiment that creating equal opportunities forms the cornerstone of a just society. The outcomes, however, shouldn't be identical trophies, but a reflection of our individual efforts, our steadfast determination, and our merits.</p><p>As we conclude this philosophical exploration into the realms of equality, equity, and meritocracy, let's cherish the ironic humor life unfurls before us. Life, in all its sardonic wisdom, offers each of us the opportunity to run our unique race. Amid this grand orchestration of humanity, let's value the distinctiveness of each journey and the varying pathways to success. 
After all, a world where everyone ends up the same would be dreadfully monotonous, don't you agree?</p>]]><![CDATA[Day 2 [Blind 75][LeetCode] Maximizing Profit from Buying and Selling Stocks]]>https://samueladebayo.com/blind-75-day2-maximizing-profit-from-buying-and-selling-stockshttps://samueladebayo.com/blind-75-day2-maximizing-profit-from-buying-and-selling-stocksThu, 20 Apr 2023 21:39:06 GMT<![CDATA[<h1 id="heading-introduction">Introduction</h1><p>Welcome to Day 2 of the Blind 75 Challenge! Today I will be tackling the problem of finding the maximum profit by buying and selling stock once, a common problem in algorithm interviews and coding competitions. 
In this blog post, I will explore a simple and efficient algorithm to solve this problem in Python, using only a single pass through the array of stock prices.</p><h1 id="heading-problem">Problem</h1><p>You are given an array <code>prices</code> where <code>prices[i]</code> is the price of a given stock on the <code>ith</code> day. You want to maximize your profit by choosing a <em>single day</em> to buy one stock and choosing a <em>different day in the future</em> to sell that stock. Return the maximum profit you can achieve from this transaction. If you cannot achieve any profit, return 0.</p><h1 id="heading-problem-definition-and-explanation">Problem Definition and Explanation</h1><p>In this question, we are given an array of stock prices, where each element represents the price on a particular day: the element at the <code>ith</code> index location corresponds to the price on the <code>ith</code> day. We are expected to find a solution that will provide the maximum profit. The maximum profit here is defined as the largest positive difference between a selling price and an earlier buying price. In other words, we want to maximize the profit by buying the stock on one day and selling it on a different day in the future. </p><p>For example, given the array below:</p><pre><code class="lang-python">[<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">7</span>, <span class="hljs-number">4</span>, <span class="hljs-number">3</span>]</code></pre><p>The maximum profit would be <code>6</code>. 
This is because the minimum price in the array is <code>1</code> and the maximum price is <code>7</code>, which comes on a later day. Again, the task is to find the solution that gets the maximum profit from the array.</p><h1 id="heading-intuition-behind-the-solution">Intuition behind the solution</h1><p><strong>Naive solution</strong>: One possible approach for finding the maximum profit by buying and selling stock is to first find the minimum and maximum values in the array and then calculate the difference between them. This can be implemented as follows:</p><pre><code class="lang-python">minimum_price = min(input_list)
maximum_price = max(input_list)</code></pre><p>Then get the maximum profit by finding the difference between the maximum price and the minimum price: </p><pre><code class="lang-python">maximum_profit = maximum_price - minimum_price</code></pre><p>While finding the minimum and maximum values in the array and subtracting them might work in some cases, it is not a correct solution in general, because it ignores the order of the days: the maximum price may occur before the minimum price, and you can only sell after you buy.</p><p>Consider the following example:</p><pre><code class="lang-python">[<span class="hljs-number">3</span>, <span class="hljs-number">2</span>, <span class="hljs-number">6</span>, <span class="hljs-number">5</span>, <span class="hljs-number">0</span>, <span class="hljs-number">3</span>]</code></pre><p>If we simply find the minimum value (0) and the maximum value (6), we would get a profit of 6 - 0 = 6, which is incorrect, since the minimum here comes after the maximum. The correct maximum profit in this case is 6 - 2 = 4, obtained by buying the stock on day 2 (price 2) and selling it on day 3 (price 6); you can only buy on a day preceding the selling day. Therefore, finding the minimum and maximum values in the array and subtracting them is not a correct solution for this problem. 
Instead, we need to use an algorithm that finds the maximum profit that can be made by buying and selling the stock once. </p><p><strong>Using the One-Pass Algorithm</strong> </p><p>To overcome the limitations of the naive approach, a one-pass algorithm can be used. This algorithm processes each element of the array only once and keeps track of the minimum price seen so far and the maximum profit that can be made by selling the stock at the current price.</p><p>Here are the steps for implementing the one-pass algorithm:</p><ol><li>First check if the list is empty. If it is empty, return 0 as the maximum profit.<pre><code class="lang-python">if not input_list:
    return 0</code></pre></li><li><p>Initialize the minimum price to the first element in the array and the maximum profit seen so far to 0.</p><pre><code class="lang-python">minimum_price = input_list[0]
maximum_profit_seen = 0</code></pre></li><li><p>Traverse through the array.</p><pre><code class="lang-python">for price in input_list:</code></pre></li><li><p>Check if the current price is lower than the minimum price.</p><pre><code class="lang-python">    if price < minimum_price:</code></pre></li><li><p>If it is, update the minimum price (since no profit can be made from a lower price).</p><pre><code class="lang-python">        minimum_price = price</code></pre></li><li><p>Else calculate the profit that can be made by selling the stock at the current price. 
This is the difference between the current price and the minimum price so far.</p><pre><code class="lang-python">    else:
        profit = price - minimum_price</code></pre></li><li><p>Finally, compare the current profit with the maximum profit seen so far and update the maximum if the current profit is greater.</p><pre><code class="lang-python">        if profit > maximum_profit_seen:
            maximum_profit_seen = profit</code></pre></li><li><p>Return the maximum profit obtained.</p><pre><code class="lang-python">return maximum_profit_seen</code></pre></li></ol><p>Using this algorithm, we can find the maximum profit that can be made by buying and selling the stock once, taking into account the constraint that the buying day must precede the selling day.</p><h1 id="heading-putting-it-altogether-code">Putting it altogether - Code</h1><pre><code class="lang-python">def maximum_profit_buy(input_list: list):
    # Check if the input list is empty
    if len(input_list) == 0:
        return 0
    # Initialize the minimum price and maximum profit seen so far
    minimum_price = input_list[0]
    maximum_profit_seen = 0
    # Traverse through the input list
    for price in input_list:
        # Update the minimum price seen so far
        if price < minimum_price:
            minimum_price = price
        else:
            # Calculate the profit that can be made by selling at the current price
            profit = price - minimum_price
            # Update the maximum profit seen so far if the current profit is greater
            if profit > maximum_profit_seen:
                maximum_profit_seen = profit
    # Return the maximum profit seen so far
    return maximum_profit_seen</code></pre><h1 id="heading-testing">Testing</h1><p>Let's test the <code>maximum_profit_buy</code> function:</p><pre><code class="lang-python">print(maximum_profit_buy([7, 6, 4, 3, 1]))  # Expected output: 0
print(maximum_profit_buy([1, 2, 3, 7, 4, 3]))  # Expected output: 6</code></pre><p>The first test case represents the array <code>[7,6,4,3,1]</code>, where the stock price decreases every day. In this case, no profit can be made, so the expected output is <code>0</code>. For the second test case, we have the array <code>[1, 2, 3, 7, 4, 3]</code>, and the maximum profit that can be made by buying stock on day 1 (price <code>1</code>) and selling it on day 4 (price <code>7</code>) is <code>6</code>, which is the expected output.</p><h1 id="heading-time-and-space-complexity">Time and Space Complexity</h1><p>The function has a time complexity of O(n), where n is the length of the input array, since we need to iterate through the array only once. 
The space complexity is O(1) since we only use a constant amount of extra space to store the minimum price seen so far and the maximum profit.</p><h1 id="heading-use-cases">Use cases</h1><p>The problem of finding the maximum profit by buying and selling a stock once is a common problem in coding interviews and competitions. It can also be used in finance and economics to analyze the performance of stocks and investments.</p><h1 id="heading-conclusion">Conclusion</h1><p>In this blog post, we explored a simple and efficient algorithm to solve the problem of finding the maximum profit that can be made by buying and selling a stock once. By using the one-pass approach and keeping track of the minimum price seen so far and the maximum profit that can be made by selling the stock at the current price, we can solve this problem in O(n) time complexity, where n is the length of the input array.</p>]]><![CDATA[<h1 id="heading-introduction">Introduction</h1><p>Welcome to Day 2 of the Blind 75 Challenge! Today I will be tackling the problem of finding the maximum profit by buying and selling stock once, a common problem in algorithm interviews and coding competitions. In this blogpost, I will explore a simple and efficient algorithm to solve this problem in Python, using only one pass/iteration through the array of stock prices.</p><h1 id="heading-problem">Problem</h1><p>You are given an array <code>prices</code> where <code>prices[i]</code> is the price of a given stock on the <code>ith</code> day.You want to maximize your profit by choosing a <em>single day</em> to buy one stock and choosing a <em>different day in the future</em> to sell that stock.Return the maximum profit you can achieve from this transaction. 
If you cannot achieve any profit, return 0.</p><h1 id="heading-problem-definition-and-explanation">Problem Definition and Explanation</h1><p>In this question, we are given an array of stock prices, where each element in the array represents the price in a particular day, and the <code>ith</code> index location of a day of the stock price corresponds the the <code>ith</code> day. We are expected to find a solution that will provide the maximum profit. The maximum profit here is defined as the largest difference between the largest positive number between the selling price and the buying price. As it is we would want to maximize the profit by buying the stock on one day and selling it on a different day in the future. </p><p>For example given the array below</p><pre><code class="lang-python">[<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">7</span>, <span class="hljs-number">4</span>, <span class="hljs-number">3</span>]</code></pre><p>The maximum profit would be <code>6</code>. Since the minimum price in the array is <code>1</code> and the maximum price is <code>7</code> (which comes at a later day).Again, the task is to find the solution that gets the maximum profit from the array.</p><h1 id="heading-intuition-behind-the-solution">Intuition behind the solution</h1><p><strong>Naive solution</strong>One possible approach for finding the maximum profit by buying and selling stock is to first find the minimum and maximum values in the array and then calculate the difference between them. This can be implemented as follows:</p><pre><code class="lang-python">minimum_price = min(input_list)maximum_[price = max(input_list)</code></pre><p>Then get the maximum profit by finding the difference between the maximum price and minimum price. 
</p><pre><code class="lang-python">maximum_profit = maximum price -minimum price</code></pre><p>While finding the minimum and maximum values in the array and subtracting them to get the maximum profit might work in some cases, it is not a correct solution in all cases. This approach is not always correct as it fails to consider cases where buying the stock on a day preceding the selling day would result in a greater profit.</p><p>Consider the following example:</p><pre><code class="lang-python">[<span class="hljs-number">3</span>, <span class="hljs-number">2</span>, <span class="hljs-number">6</span>, <span class="hljs-number">5</span>, <span class="hljs-number">0</span>, <span class="hljs-number">3</span>]</code></pre><p>If we simply find the minimum value (0) and the maximum value (6), we would get a profit of 6 - 0 = 6, which is incorrect. The correct maximum profit that can be made in this case is 6 - 2 = 4, by buying the stock on day 2 (price 2) and selling it on day 3 (price 6). Since you can only buy on a day preceding the selling day. Therefore, finding the minimum and maximum values in the array and subtracting them is not a correct solution for this problem. Instead, we need to use an algorithm that finds the maximum profit that can be made by buying and selling the stock once. </p><p><strong>Using One-Pass Algorithm</strong> </p><p>To overcome the limitations of the naive approach, a one-pass algorithm can be used. This algorithm processes each element of the data structure only once and keeps track of the minimum price seen so far and the maximum profit that can be made from selling the stock at the current price.</p><p>Here are the steps for implementing the one-pass algorithm:</p><ol><li>First check if the list is empty. 
If empty return 0 as maximum profit.<pre><code class="lang-python"><span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> prices: <span class="hljs-keyword">return</span> <span class="hljs-number">0</span></code></pre></li><li><p>Initialize the minimum price to the first element in the array.</p><pre><code class="lang-python">maximum_profit = <span class="hljs-number">0</span></code></pre></li><li><p>Traverse through the array.</p><pre><code class="lang-python"><span class="hljs-keyword">for</span> price <span class="hljs-keyword">in</span> input_list:</code></pre></li><li><p>Check if the current price is lower than the minimum price.</p><pre><code class="lang-python"> <span class="hljs-keyword">if</span> price < minimum_price:</code></pre></li><li><p>If it is, update the minimum price (since no profit can be made from a lower price.</p><pre><code class="lang-python"> minimum_price = price</code></pre></li><li><p>Else calculate the profit that can be made by selling the stock at the current price. 
This is the difference between the current price and the minimum price so far.</p><pre><code class="lang-python"> <span class="hljs-keyword">else</span>: profit = price - minimum_price</code></pre></li><li><p>Finally, compare the current profit with the maximum profit seen so far and update the profit if the current profit is greater.</p><pre><code class="lang-python"> <span class="hljs-keyword">if</span> profit > maximum_profit_seen: maximum_profit_seen = profit</code></pre></li><li><p>Return the maximum profit obtained.```pythonreturn maximum_profit_seen</p></li></ol><p>Using this algorithm, we can find the maximum profit that can be made by buying and selling the stock once, taking into account the constraint that the buying day must precede the selling day.</p><h1 id="heading-putting-it-altogether-code">Putting it altogether - Code</h1><pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">maximum_profit_buy</span>(<span class="hljs-params">input_list: list</span>):</span> <span class="hljs-comment"># Check if the input list is empty</span> <span class="hljs-keyword">if</span> len (input_list) == <span class="hljs-number">0</span>: <span class="hljs-keyword">return</span> <span class="hljs-number">0</span> <span class="hljs-comment"># Initialize the minimum price and maximum profit seen so far</span> minimum_price = input_list[<span class="hljs-number">0</span>] maximum_profit_seen = <span class="hljs-number">0</span> <span class="hljs-comment"># Traverse through the input list</span> <span class="hljs-keyword">for</span> price <span class="hljs-keyword">in</span> input_list: <span class="hljs-comment"># Update the minimum price seen so far</span> <span class="hljs-keyword">if</span> price < minimum_price: minimum_price = price <span class="hljs-keyword">else</span>: <span class="hljs-comment"># Calculate the profit that can be made by selling at the current price</span> profit = price - 
minimum_price <span class="hljs-comment"># Update the maximum profit seen so far if the current profit is greater</span> <span class="hljs-keyword">if</span> profit > maximum_profit_seen: maximum_profit_seen = profit <span class="hljs-comment"># Return the maximum profit seen so far</span> <span class="hljs-keyword">return</span> maximum_profit_seen</code></pre><h1 id="heading-testing">Testing</h1><p>Let's test the <code>maximum_profit_buy</code> function:</p><pre><code class="lang-python">print(maximum_profit_buy([<span class="hljs-number">7</span>,<span class="hljs-number">6</span>,<span class="hljs-number">4</span>,<span class="hljs-number">3</span>,<span class="hljs-number">1</span>])) <span class="hljs-comment"># Expected output: 0</span><span class="hljs-keyword">print</span> (maximum_profit_buy ([<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">7</span>, <span class="hljs-number">4</span>, <span class="hljs-number">3</span>])) <span class="hljs-comment"># Expected Output 6</span></code></pre><p>The First test case represents the array <code>[7,6,4,3,1]</code>, where the stock price decreases every day. In this case, no profit can be made, so the expected output is <code>0</code>. For the second test case, we have an array <code>[1, 2, 3, 7, 4, 3]</code> and the maximum profit that can be made by buying stock on <code>day 1</code> <code>price 1</code> is and selling it on <code>day 4</code> <code>price 7</code> is 6 which is the expected output.</p><h1 id="heading-time-and-space-complexity">Time and Space Complexity</h1><p>The function has a time complexity of O(n), where n is the length if the input array. This is so since we need to iterate through the array only once. 
The space complexity is O(1) since we only use a constant amount of extra space to store the minimum price seen so far and the maximum profit.</p><h1 id="heading-use-cases">Use cases</h1><p>The problem of finding the maximum profit by buying and selling a stock once is a common problem in coding interviews and competitions. It can also be used in finance and economics to analyze the performance of stocks and investments.</p><h1 id="heading-conclusion">Conclusion</h1><p>In this blog post, we explored a simple and efficient algorithm to solve the problem of finding the maximum profit that can be made by buying and selling a stock once. By using the one-pass approach and keeping track of the minimum price seen so far and the maximum profit that can be made by selling the stock at the current price, we can solve this problem in O(n) time complexity, where n is the length of the input array.</p>]]><![CDATA[Day 1 [Blind 75][LeetCode] Two Sum Problem: Using Hash Tables to Find Pairs of Integers That Add Up to a Target Value]]>https://samueladebayo.com/blind75-day1-two-sum-pythonhttps://samueladebayo.com/blind75-day1-two-sum-pythonWed, 19 Apr 2023 18:25:38 GMT<![CDATA[<h1 id="heading-problem">Problem</h1><p>Given an array of integers <code>num</code> and an integer <code>target</code>, return indices of the two numbers such that they add up to <code>target</code>. You may assume that each input would have exactly one solution and you may not use the same element twice. You can assume that the given input array is not sorted.</p><h1 id="heading-problem-definition-and-explanation">Problem definition and explanation.</h1><p>The two-sum problem, as it is widely called, is a classic coding challenge that requires finding two integers in a given list that add up to a target value. 
The problem is often presented in different technical contexts, for example in algorithmic design, data structures, and optimization, or even in the form of interview questions for most software engineering positions.</p><p>Now, to the main thing, this problem requires us to find two integers, provided they are present in the given list, that add up to a given target value. So, for example, if given <code>example_list</code> and a <code>target_int</code> below: </p><pre><code class="lang-python">example_list = [<span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">6</span>, <span class="hljs-number">9</span>]
target_int = <span class="hljs-number">9</span></code></pre><p>You would be expected to come up with code that returns the index location of <code>3</code> and <code>6</code> since these are the integers that add up to the target integer, such that your return value is: <code>[1, 2]</code></p><h1 id="heading-intuition-behind-the-solution">Intuition behind the solution</h1><p><strong>Layman's thought</strong></p><p>When I first approached the Two-Sum problem, my initial thought was to find a way to map each number in the input list to its corresponding index location. I realized that this could be achieved by creating a table or dictionary that stores each number as a key and its corresponding index as the value. 
Such that for the list below:</p><pre><code class="lang-python">example_list = [<span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">6</span>, <span class="hljs-number">9</span>]</code></pre><p>you would have a table similar to the one below:</p><div class="hn-table"><table><thead><tr><td>Elements</td><td>Index Location</td></tr></thead><tbody><tr><td>2</td><td>0</td></tr><tr><td>3</td><td>1</td></tr><tr><td>6</td><td>2</td></tr><tr><td>9</td><td>3</td></tr></tbody></table></div><p>Next, I iterated over the input list and for each number, I calculated the difference between that number and the target integer. I then checked if this difference existed in the input list (excluding the current number being checked). If the difference was found in the list, I used the table or dictionary I created earlier to find the index location of the number that makes up the target sum. This gave me the indices of the two numbers that add up to the target value.</p><p>In summary, my solution involved creating a table or dictionary that maps each number to its corresponding index location in the input list, and then iterating over the list to find the difference between each number and the target integer. I then used the table or dictionary to find the location of the number that makes up the target sum.</p><p><strong>Pythonic thoughts</strong></p><p>The table can be represented in Python as a hash table or dictionary data structure that maps each integer in the input list to its corresponding index location. This will enable us to access the index location of any integer in constant time. To do this, I created a dictionary variable that will store the integers as keys and index locations as values. In Python, the index location and element can be obtained using the built-in <code>enumerate</code> function. 
This will return both the index location and element while iterating through a list: </p><pre><code class="lang-python">cache = {el: en <span class="hljs-keyword">for</span> en, el <span class="hljs-keyword">in</span> enumerate(input_list)}</code></pre><p>Next, iterate over the input list and for each integer, calculate the difference between that integer and the target (given):</p><pre><code class="lang-python"> <span class="hljs-keyword">for</span> en, int_1 <span class="hljs-keyword">in</span> enumerate(input_list):</code></pre><p>Next, check if the difference exists in the input list (excluding the current integer being checked). This is achieved by looking it up in the dictionary. This search operation takes constant time. </p><pre><code class="lang-python"> <span class="hljs-keyword">if</span> (target_int - int_1) <span class="hljs-keyword">in</span> cache:
     <span class="hljs-keyword">if</span> cache[target_int - int_1] != en:</code></pre><p>If this search operation is successful and the difference is found in the input list, use the dictionary to look up the index location of the integer that makes up the sum. The current index <code>en</code> gives the location of the first integer. </p><pre><code class="lang-python"> <span class="hljs-keyword">return</span> [en, cache[target_int - int_1]]</code></pre><p>If no match is found, i.e. no two integers add up to the target value, we return an empty list. 
</p><pre><code class="lang-python"> <span class="hljs-keyword">return</span> []</code></pre><h1 id="heading-putting-it-altogether-code">Putting it altogether - code</h1><pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">two_sum</span>(<span class="hljs-params">input_list: list, target_int: int</span>):</span>
    <span class="hljs-comment"># Create a hash table or dictionary that maps each integer to its index location</span>
    cache = {el: en <span class="hljs-keyword">for</span> en, el <span class="hljs-keyword">in</span> enumerate(input_list)}
    <span class="hljs-comment"># Iterate over the input list and check for the sum of two integers that equals the target value</span>
    <span class="hljs-keyword">for</span> en, int_1 <span class="hljs-keyword">in</span> enumerate(input_list):
        <span class="hljs-keyword">if</span> (target_int - int_1) <span class="hljs-keyword">in</span> cache:
            <span class="hljs-comment"># Check that the two integers are not the same</span>
            <span class="hljs-keyword">if</span> cache[target_int - int_1] != en:
                <span class="hljs-comment"># Return the indices of the two integers that add up to the target value</span>
                <span class="hljs-keyword">return</span> [en, cache[target_int - int_1]]
    <span class="hljs-comment"># Return an empty list if no two integers add up to the target value</span>
    <span class="hljs-keyword">return</span> []</code></pre><h1 id="heading-testing">Testing</h1><p>To test if the code works:</p><pre><code class="lang-python"><span class="hljs-comment"># Example usage</span>
list_1 = [<span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">6</span>, <span class="hljs-number">9</span>]
print(two_sum(list_1, <span class="hljs-number">9</span>))  <span class="hljs-comment"># Output: [1, 2]</span></code></pre><h1 id="heading-time-and-space-complexity">Time and Space Complexity</h1><p>The time complexity of this solution 
is O(n), where n is the length of the input list, and the space complexity is also O(n) since we need to store each integer in the input list as a key in the dictionary.</p><h1 id="heading-use-cases">Use cases</h1><p>The two-sum problem is a common problem in computer science and is used in many real-world applications. For example, in financial applications, we can use the two-sum problem to find pairs of stocks that add up to a given target value. In image processing, we can use the two-sum problem to find pairs of pixels that add up to a given target colour.</p><h1 id="heading-conclusion">Conclusion</h1><p>In this blog post, we discussed the two-sum problem, the intuition behind solving it, and how to solve it using a dictionary/hash table. We saw that this problem has a time complexity of O(n) and a space complexity of O(n). We also discussed some use cases of the two-sum problem in real-world applications.</p>]]><![CDATA[<h1 id="heading-problem">Problem</h1><p>Given an array of integers <code>num</code> and an integer <code>target</code>, return indices of the two numbers such that they add up to <code>target</code>. You may assume that each input would have exactly one solution and you may not use the same element twice. You can assume that the given input array is not sorted.</p><h1 id="heading-problem-definition-and-explanation">Problem definition and explanation.</h1><p>The two-sum problem, as it is widely called, is a classic coding challenge that requires finding two integers in a given list that add up to a target value. The problem is often presented in different technical contexts, for example in algorithmic design, data structures, and optimization, or even in the form of interview questions for most software engineering positions.</p><p>Now, to the main thing, this problem requires us to find two integers, provided they are present in the given list, that add up to a given target value. 
So, for example, if given <code>example_list</code> and a <code>target_int</code> below: </p><pre><code class="lang-python">example_list = [<span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">6</span>, <span class="hljs-number">9</span>]
target_int = <span class="hljs-number">9</span></code></pre><p>You would be expected to come up with code that returns the index location of <code>3</code> and <code>6</code> since these are the integers that add up to the target integer, such that your return value is: <code>[1, 2]</code></p><h1 id="heading-intuition-behind-the-solution">Intuition behind the solution</h1><p><strong>Layman's thought</strong></p><p>When I first approached the Two-Sum problem, my initial thought was to find a way to map each number in the input list to its corresponding index location. I realized that this could be achieved by creating a table or dictionary that stores each number as a key and its corresponding index as the value. Such that for the list below:</p><pre><code class="lang-python">example_list = [<span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">6</span>, <span class="hljs-number">9</span>]</code></pre><p>you would have a table similar to the one below:</p><div class="hn-table"><table><thead><tr><td>Elements</td><td>Index Location</td></tr></thead><tbody><tr><td>2</td><td>0</td></tr><tr><td>3</td><td>1</td></tr><tr><td>6</td><td>2</td></tr><tr><td>9</td><td>3</td></tr></tbody></table></div><p>Next, I iterated over the input list and for each number, I calculated the difference between that number and the target integer. I then checked if this difference existed in the input list (excluding the current number being checked). If the difference was found in the list, I used the table or dictionary I created earlier to find the index location of the number that makes up the target sum. 
This gave me the indices of the two numbers that add up to the target value.</p><p>In summary, my solution involved creating a table or dictionary that maps each number to its corresponding index location in the input list, and then iterating over the list to find the difference between each number and the target integer. I then used the table or dictionary to find the location of the number that makes up the target sum.</p><p><strong>Pythonic thoughts</strong></p><p>The table can be represented in Python as a hash table or dictionary data structure that maps each integer in the input list to its corresponding index location. This will enable us to access the index location of any integer in constant time. To do this, I created a dictionary variable that will store the integers as keys and index locations as values. In Python, the index location and element can be obtained using the built-in <code>enumerate</code> function. This will return both the index location and element while iterating through a list: </p><pre><code class="lang-python">cache = {el: en <span class="hljs-keyword">for</span> en, el <span class="hljs-keyword">in</span> enumerate(input_list)}</code></pre><p>Next, iterate over the input list and for each integer, calculate the difference between that integer and the target (given):</p><pre><code class="lang-python"> <span class="hljs-keyword">for</span> en, int_1 <span class="hljs-keyword">in</span> enumerate(input_list):</code></pre><p>Next, check if the difference exists in the input list (excluding the current integer being checked). This is achieved by looking it up in the dictionary. This search operation takes constant time. 
</p><pre><code class="lang-python"> <span class="hljs-keyword">if</span> (target_int - int_1) <span class="hljs-keyword">in</span> cache:
     <span class="hljs-keyword">if</span> cache[target_int - int_1] != en:</code></pre><p>If this search operation is successful and the difference is found in the input list, use the dictionary to look up the index location of the integer that makes up the sum. The current index <code>en</code> gives the location of the first integer. </p><pre><code class="lang-python"> <span class="hljs-keyword">return</span> [en, cache[target_int - int_1]]</code></pre><p>If no match is found, i.e. no two integers add up to the target value, we return an empty list. </p><pre><code class="lang-python"> <span class="hljs-keyword">return</span> []</code></pre><h1 id="heading-putting-it-altogether-code">Putting it altogether - code</h1><pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">two_sum</span>(<span class="hljs-params">input_list: list, target_int: int</span>):</span>
    <span class="hljs-comment"># Create a hash table or dictionary that maps each integer to its index location</span>
    cache = {el: en <span class="hljs-keyword">for</span> en, el <span class="hljs-keyword">in</span> enumerate(input_list)}
    <span class="hljs-comment"># Iterate over the input list and check for the sum of two integers that equals the target value</span>
    <span class="hljs-keyword">for</span> en, int_1 <span class="hljs-keyword">in</span> enumerate(input_list):
        <span class="hljs-keyword">if</span> (target_int - int_1) <span class="hljs-keyword">in</span> cache:
            <span class="hljs-comment"># Check that the two integers are not the same</span>
            <span class="hljs-keyword">if</span> cache[target_int - int_1] != en:
                <span class="hljs-comment"># Return the indices of the two integers that add up to the target value</span>
                <span class="hljs-keyword">return</span> [en, cache[target_int - int_1]]
    <span class="hljs-comment"># Return an empty list if no two integers add up to the target value</span>
    <span class="hljs-keyword">return</span> []</code></pre><h1 id="heading-testing">Testing</h1><p>To test if the code works:</p><pre><code class="lang-python"><span class="hljs-comment"># Example usage</span>
list_1 = [<span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">6</span>, <span class="hljs-number">9</span>]
print(two_sum(list_1, <span class="hljs-number">9</span>))  <span class="hljs-comment"># Output: [1, 2]</span></code></pre><h1 id="heading-time-and-space-complexity">Time and Space Complexity</h1><p>The time complexity of this solution is O(n), where n is the length of the input list, and the space complexity is also O(n) since we need to store each integer in the input list as a key in the dictionary.</p><h1 id="heading-use-cases">Use cases</h1><p>The two-sum problem is a common problem in computer science and is used in many real-world applications. For example, in financial applications, we can use the two-sum problem to find pairs of stocks that add up to a given target value. In image processing, we can use the two-sum problem to find pairs of pixels that add up to a given target colour.</p><h1 id="heading-conclusion">Conclusion</h1><p>In this blog post, we discussed the two-sum problem, the intuition behind solving it, and how to solve it using a dictionary/hash table. We saw that this problem has a time complexity of O(n) and a space complexity of O(n). We also discussed some use cases of the two-sum problem in real-world applications.</p>]]><![CDATA[Reading List for February - July]]>https://samueladebayo.com/reading-list-for-february-julyhttps://samueladebayo.com/reading-list-for-february-julySat, 04 Feb 2023 05:39:19 GMT<![CDATA[<p>1. "<strong>Beyond Order: 12 More Rules for Life</strong>" by Jordan B. Peterson - A re-read for reinforcing my focus and perspectives.</p><p>2. "<strong>Mao: The Unknown Story</strong>" by Jung Chang - A historical exploration of Mao Zedong's life and impact on China.</p><p>3. "<strong>The Irish Difference: A Tumultuous History of Irish breakup with Britain</strong>" by Fergal Tobin - An in-depth examination of Irish culture, including its historical background and unique characteristics.</p><p>4. "<strong>Multiple View Geometry in Computer Vision</strong>" by Richard Hartley and Andrew Zisserman - I have read papers by both authors and am fascinated by their work.</p><p>5. "<strong>Bayesian Reasoning and Machine Learning</strong>" by David Barber - The future is plagued with uncertainty, and so is our physical world. Building an interactive machine for our physical world requires understanding uncertainties and mitigating their ripple effects. An exploration of the integration of Bayesian reasoning and machine learning for modeling uncertain systems and mitigating their potential impact.</p>]]><![CDATA[<p>1. "<strong>Beyond Order: 12 More Rules for Life</strong>" by Jordan B. Peterson - A re-read for reinforcing my focus and perspectives.</p><p>2. "<strong>Mao: The Unknown Story</strong>" by Jung Chang - A historical exploration of Mao Zedong's life and impact on China.</p><p>3. "<strong>The Irish Difference: A Tumultuous History of Irish breakup with Britain</strong>" by Fergal Tobin - An in-depth examination of Irish culture, including its historical background and unique characteristics.</p><p>4. "<strong>Multiple View Geometry in Computer Vision</strong>" by Richard Hartley and Andrew Zisserman - I have read papers by both authors and am fascinated by their work.</p><p>5. "<strong>Bayesian Reasoning and Machine Learning</strong>" by David Barber - The future is plagued with uncertainty, and so is our physical world. Building an interactive machine for our physical world requires understanding uncertainties and mitigating their ripple effects. 
An exploration of the integration of Bayesian reasoning and machine learning for modeling uncertain systems and mitigating their potential impact.</p>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1675489040790/9522c916-04f9-4121-8bce-99f00416e7e4.jpeg<![CDATA[About Equality and Equity]]>https://samueladebayo.com/about-equality-and-equityhttps://samueladebayo.com/about-equality-and-equitySat, 14 Jan 2023 13:09:45 GMT<![CDATA[<p>The thoughts expressed here are mine and do not in any way represent that of my university, employers, hierarchy, or close associates. Additionally, I am no expert in this field; it is only from observations and personal experiences that I have drawn my opinions. </p><p>For the last few months, I have been bothered by the ideology peddled by most employers of labour. This very concern has led me to ask a not-so-popular question: am I being approached for employment or opportunities because of the colour of my skin? Perhaps this question is popular, albeit only in the minds of the most concerned few. This rather daunting question even led to a more stomach-turning one: is this equity or equality, if my question turns out to be true? Of course, if False, am I being headhunted because of my intelligence, skills, and diversity of my uniqueness? Or is it rather because of prejudice? If True, does this mean I am privileged and profiting from an undeserved opportunity?</p><p>As a researcher, when I am faced with a challenging technical problem, especially the ones that leave me tasking for days, I am led to examine the base class. In object-oriented programming, a base class is a fundamental template or blueprint on which other classes are built. These newly created classes inherit functionalities, methods, and principles from the base class. It is also to be noted that new methods, principles, and ideologies can be created which can override the inherited methods. Please hold this thought as this will make more sense soon. 
Back to my original ponder, the questions I have asked myself for weeks have led me to this one question: which is best, Equality or Equity? Or, succinctly put: which is the more noble, just, and fair goal, Equity or Equality? While both have inherited the ideas of social justice, fairness of rights and opportunities, one, more than the other, is overriding the very fundamental truth of the base class while claiming to belong to the base class.</p><p>As a society, we constantly debate over whether equality or equity is the more desirable goal. On the surface, the two concepts may appear to be interchangeable, but upon deeper examination, it becomes clear that they represent fundamentally different ways of thinking about the world.</p><p>Equality is the absolute ideal that everyone should be treated equally, regardless of background or characteristics. This is a noble goal and one that is deeply ingrained in our culture. The idea that all people should be treated with dignity and respect is a fundamental principle of democracy. However, the problem with this approach is that it assumes that everyone starts from the same place and that the same opportunities are available to everyone. </p><p>This is a fallacy. In fact, no two people are equal, and in truth people have different starting points and challenges to overcome. Some individuals may have had a privileged upbringing, while others may have struggled with poverty or discrimination. Treating everyone the same, without taking these differences into account, can perpetuate inequality, defeating the main purpose of fairness, diversity, and inclusion.</p><p>Equity, however, is the idea that everyone should have, and be provided with, an equal opportunity to succeed. 
This means that individuals and groups who have been traditionally marginalized may require additional resources or support to achieve the same level of success as those who have not faced such barriers.</p><p>To achieve equity, we must be willing to acknowledge and address the ways in which structural inequalities exist in our society. This requires us to take a step back and examine the systems and institutions that shape our lives. We must ask ourselves: Are the playing field and opportunities equal for all individuals? Are certain groups or individuals facing barriers or discrimination that make it harder for them to succeed? It is only by acknowledging these difficult truths and taking steps to address them that we can truly achieve a society that is fair and just for all. Equality may be a nice idea, but it is not enough. We must strive for equity if we are to create a society in which everyone can reach their full potential.</p><p>It is, however, important to note that achieving equity does not mean that everyone will have the same outcome, but rather that everyone will have the same opportunity to succeed. This means that some individuals may still achieve more success than others, but it will not be due to the systemic barriers or discrimination that have constantly plagued our society. More importantly, equity is not about granting preferential treatment to certain groups or individuals, but rather about levelling the playing field and providing the necessary resources and support to overcome barriers.</p><p>Additionally, equity must be seen as an ongoing, continuous process: as society is ever-changing and dynamic, opportunities to address inequalities, challenges, and discrimination will always arise. 
In practice, achieving equity may involve a variety of actions, such as implementing policies and practices that promote diversity and inclusion, creating more accessible educational and job training programs, and addressing biases in hiring and promotion practices.</p><p>Ultimately, the goal of equity is to create a society in which everyone can reach their full potential, regardless of their background or characteristics. It's not only morally right but also beneficial for society, as a diverse and inclusive society is more productive and innovative.</p><p>In conclusion, while equality is a very noble goal, it is not enough to achieve a truly just and fair society. Equity is the more desirable goal, as it acknowledges and addresses the structural inequalities that exist in our society and ensures that everyone has an equal opportunity to succeed. This requires us to be willing to look beyond the surface and examine the systems and institutions that shape our lives. Only by achieving equity can we create a society in which everyone can thrive.</p>]]><![CDATA[<p>The thoughts expressed here are mine and do not in any way represent that of my university, employers, hierarchy, or close associates. Additionally, I am no expert in this field; it is only from observations and personal experiences that I have drawn my opinions. </p><p>For the last few months, I have been bothered by the ideology peddled by most employers of labour. This very concern has led me to ask a not-so-popular question: am I being approached for employment or opportunities because of the colour of my skin? Perhaps this question is popular, albeit only in the minds of the most concerned few. This rather daunting question even led to a more stomach-turning one: is this equity or equality, if my question turns out to be true? Of course, if False, am I being headhunted because of my intelligence, skills, and diversity of my uniqueness? Or is it rather because of prejudice? 
If True, does this mean I am privileged and profiting from an undeserved opportunity?</p><p>As a researcher, when I am faced with a challenging technical problem, especially the ones that leave me tasking for days, I am led to examine the base class. In object-oriented programming, a base class is a fundamental template or blueprint on which other classes are built. These newly created classes inherit functionalities, methods, and principles from the base class. It is also to be noted that new methods, principles, and ideologies can be created which can override the inherited methods. Please hold this thought as this will make more sense soon. Back to my original ponder, the questions I have asked myself for weeks have led me to this one question: which is best, Equality or Equity? Or, succinctly put: which is the more noble, just, and fair goal, Equity or Equality? While both have inherited the ideas of social justice, fairness of rights and opportunities, one, more than the other, is overriding the very fundamental truth of the base class while claiming to belong to the base class.</p><p>As a society, we constantly debate over whether equality or equity is the more desirable goal. On the surface, the two concepts may appear to be interchangeable, but upon deeper examination, it becomes clear that they represent fundamentally different ways of thinking about the world.</p><p>Equality is the absolute ideal that everyone should be treated equally, regardless of background or characteristics. This is a noble goal and one that is deeply ingrained in our culture. The idea that all people should be treated with dignity and respect is a fundamental principle of democracy. However, the problem with this approach is that it assumes that everyone starts from the same place and that the same opportunities are available to everyone. </p><p>This is a fallacy. In fact, no two people are equal, and in truth people have different starting points and challenges to overcome. 
Some individuals may have had a privileged upbringing, while others may have struggled with poverty or discrimination. Treating everyone the same, without taking these differences into account, can perpetuate inequality, defeating the main purpose of fairness, diversity, and inclusion.</p><p>Equity, however, is the idea that everyone should have, and be provided with, an equal opportunity to succeed. This means that individuals and groups who have been traditionally marginalized may require additional resources or support to achieve the same level of success as those who have not faced such barriers.</p><p>To achieve equity, we must be willing to acknowledge and address the ways in which structural inequalities exist in our society. This requires us to take a step back and examine the systems and institutions that shape our lives. We must ask ourselves: Are the playing field and opportunities equal for all individuals? Are certain groups or individuals facing barriers or discrimination that make it harder for them to succeed? It is only by acknowledging these difficult truths and taking steps to address them that we can truly achieve a society that is fair and just for all. Equality may be a nice idea, but it is not enough. We must strive for equity if we are to create a society in which everyone can reach their full potential.</p><p>It is, however, important to note that achieving equity does not mean that everyone will have the same outcome, but rather that everyone will have the same opportunity to succeed. This means that some individuals may still achieve more success than others, but it will not be due to the systemic barriers or discrimination that have constantly plagued our society. 
More importantly, equity is not about granting preferential treatment to certain groups or individuals, but rather about levelling the playing field and providing the necessary resources and support to overcome barriers.</p><p>Additionally, equity must be seen as an ongoing, continuous process: as society is ever-changing and dynamic, opportunities to address inequalities, challenges, and discrimination will always arise. In practice, achieving equity may involve a variety of actions, such as implementing policies and practices that promote diversity and inclusion, creating more accessible educational and job training programs, and addressing biases in hiring and promotion practices.</p><p>Ultimately, the goal of equity is to create a society in which everyone can reach their full potential, regardless of their background or characteristics. It's not only morally right but also beneficial for society, as a diverse and inclusive society is more productive and innovative.</p><p>In conclusion, while equality is a very noble goal, it is not enough to achieve a truly just and fair society. Equity is the more desirable goal, as it acknowledges and addresses the structural inequalities that exist in our society and ensures that everyone has an equal opportunity to succeed. This requires us to be willing to look beyond the surface and examine the systems and institutions that shape our lives. Only by achieving equity can we create a society in which everyone can thrive.</p>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1673701422158/9d49156e-15c6-4fde-b480-301e1120914d.png<![CDATA[Ormedian-Utils, My first python Package]]>https://samueladebayo.com/ormedian-utilshttps://samueladebayo.com/ormedian-utilsSat, 03 Sep 2022 01:02:38 GMT<![CDATA[<p>Finally got around to publishing my first Python package and it is for basic computer vision tasks. 
</p><p>Over the last couple of months, I realised that I have had to write the same code over and over again and manually do tasks that I could easily have automated. This usually leads to a lot of boilerplate code. Being the curious mind that I am, I decided to write a package that I could easily install on my PC and run these tasks whenever I so desire. I can also plug this package into a larger project code base. </p><p>So, here is introducing <code><a target="_blank" href="https://pypi.org/project/ormedian-utils/#description">Ormedian-Utils</a></code>, a Python package for basic CV tasks. </p><p><a target="_blank" href="https://user-images.githubusercontent.com/73752977/188218332-96f6766d-3f7f-4eb0-89f2-c2b58a08c375.mp4"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1662166650958/c912nlwWJ.png" alt="Screenshot 2022-09-03 at 01.52.30.png" /></a></p><p>As of now, <code>ormedian-utils</code> can do the following:</p><ul><li>Save frames from videos, a camera feed, or a folder containing more than one video.</li><li>Move specific files from among myriads of other files.</li><li>Resize images in a folder or multiple folders.</li></ul><p>Read the docs <a target="_blank" href="https://github.com/exponentialR/ormedian-utils">here</a>.</p><p>I hope you find the package useful.</p>]]><![CDATA[<p>Finally got around to publishing my first Python package, and it is for basic computer vision tasks. </p><p>Over the last couple of months, I realised that I have had to write the same code over and over again and manually do tasks that I could easily have automated. This usually leads to a lot of boilerplate code. Being the curious mind that I am, I decided to write a package that I could easily install on my PC and run these tasks whenever I so desire. I can also plug this package into a larger project code base. </p><p>So, here is introducing <code><a target="_blank" href="https://pypi.org/project/ormedian-utils/#description">Ormedian-Utils</a></code>, a Python package for basic CV tasks. 
</p><p><a target="_blank" href="https://user-images.githubusercontent.com/73752977/188218332-96f6766d-3f7f-4eb0-89f2-c2b58a08c375.mp4"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1662166650958/c912nlwWJ.png" alt="Screenshot 2022-09-03 at 01.52.30.png" /></a></p><p>As of now, <code>ormedian-utils</code> can do the following:</p><ul><li>Save frames from videos, a camera feed, or a folder containing more than one video.</li><li>Move specific files from among myriads of other files.</li><li>Resize images in a folder or multiple folders.</li></ul><p>Read the docs <a target="_blank" href="https://github.com/exponentialR/ormedian-utils">here</a>.</p><p>I hope you find the package useful.</p>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1662166525027/ZuXoErJGJ.png<![CDATA[August Quote]]>https://samueladebayo.com/august-quotehttps://samueladebayo.com/august-quoteTue, 09 Aug 2022 12:57:54 GMT<![CDATA[<p>It's been a rough ride (3 months straight). Believe me, it is still rough!</p><p>But this quote has stayed with me for the last 3 months.</p><p><em><code>If you ever feel like a failure, remember failure is part of succeeding</code></em></p><p>This has kept me going. I hope that, whatever you are facing now or will face in the future, you will have the strength to pull through and come to the realisation that it's all part of the process. 9th August 2022.</p>]]><![CDATA[<p>It's been a rough ride (3 months straight). Believe me, it is still rough!</p><p>But this quote has stayed with me for the last 3 months.</p><p><em><code>If you ever feel like a failure, remember failure is part of succeeding</code></em></p><p>This has kept me going. I hope that, whatever you are facing now or will face in the future, you will have the strength to pull through and come to the realisation that it's all part of the process. 
9th August 2022.</p>]]><![CDATA[Ongoing Research Week .07.22]]>https://samueladebayo.com/ongoing-research-week-0722https://samueladebayo.com/ongoing-research-week-0722Thu, 21 Jul 2022 12:32:18 GMT<![CDATA[<p> I recently released a paper and presented it at the 6th IFAC conference in Romania. To see the paper, <a target="_blank" href="https://pureadmin.qub.ac.uk/ws/portalfiles/portal/328102345/Hand_Eye_Object_Tracking_for_Human_Intention_Inference_final.pdf">check here for Hand-Eye-Object Tracking for Human Intention Inference</a>, a novel learning-based approach to human intention inference from a fusion of three visual cues.</p><p> I am currently working on Self-Learn-Your-Key Gaze (<em>SLYKGaze</em>), a gaze estimation technique that minimizes domain-expertise limitations and aleatoric uncertainties in learning-based gaze estimation. </p><p> I will be joining <a target="_blank" href="https://www.belfastmet.ac.uk/">Belfast Metropolitan College</a> in September 2022 as a Part-Time Lecturer in Machine Learning and a Part-Time Lecturer in Python. 
</p><p> In July 2022, our ethics application was accepted; hence, we are set to collect a first-of-its-kind dataset.</p><h4 id="heading-ongoing-research">Ongoing Research</h4><div class="hn-table"><table><thead><tr><td>Status</td><td>Milestone</td><td>Goals</td><td>ETA</td></tr></thead><tbody><tr><td>🚀</td><td><strong><a class="post-section-overview" href="https://pureadmin.qub.ac.uk/ws/portalfiles/portal/328102345/Hand_Eye_Object_Tracking_for_Human_Intention_Inference_final.pdf">H-E-O</a></strong></td><td>0 / 1</td><td>March 2022</td></tr><tr><td>🚀</td><td><strong>SLYKGaze</strong></td><td>5 / 10</td><td><code>in progress</code></td></tr><tr><td>🚀</td><td><strong>HRI Dataset</strong></td><td>4 / 10</td><td><code>in progress</code></td></tr><tr><td>🚀</td><td><strong><a target="_blank" href="https://www.qub.ac.uk/sites/iams/Capabilities/ResearchTeam/FedericoZocco/">RDSH, a deep neural network for pose estimation with DenseNet as the backbone; collaborative work with Federico Zocco</a></strong></td><td>1 / 3</td><td><code>ongoing</code></td></tr></tbody></table></div>]]><![CDATA[<p> I recently released a paper and presented it at the 6th IFAC conference in Romania. To see the paper, <a target="_blank" href="https://pureadmin.qub.ac.uk/ws/portalfiles/portal/328102345/Hand_Eye_Object_Tracking_for_Human_Intention_Inference_final.pdf">check here for Hand-Eye-Object Tracking for Human Intention Inference</a>, a novel learning-based approach to human intention inference from a fusion of three visual cues.</p><p> I am currently working on Self-Learn-Your-Key Gaze (<em>SLYKGaze</em>), a gaze estimation technique that minimizes domain-expertise limitations and aleatoric uncertainties in learning-based gaze estimation. </p><p> I will be joining <a target="_blank" href="https://www.belfastmet.ac.uk/">Belfast Metropolitan College</a> in September 2022 as a Part-Time Lecturer in Machine Learning and a Part-Time Lecturer in Python. 
</p><p> In July 2022, our ethics application was accepted; hence, we are set to collect a first-of-its-kind dataset.</p><h4 id="heading-ongoing-research">Ongoing Research</h4><div class="hn-table"><table><thead><tr><td>Status</td><td>Milestone</td><td>Goals</td><td>ETA</td></tr></thead><tbody><tr><td>🚀</td><td><strong><a class="post-section-overview" href="https://pureadmin.qub.ac.uk/ws/portalfiles/portal/328102345/Hand_Eye_Object_Tracking_for_Human_Intention_Inference_final.pdf">H-E-O</a></strong></td><td>0 / 1</td><td>March 2022</td></tr><tr><td>🚀</td><td><strong>SLYKGaze</strong></td><td>5 / 10</td><td><code>in progress</code></td></tr><tr><td>🚀</td><td><strong>HRI Dataset</strong></td><td>4 / 10</td><td><code>in progress</code></td></tr><tr><td>🚀</td><td><strong><a target="_blank" href="https://www.qub.ac.uk/sites/iams/Capabilities/ResearchTeam/FedericoZocco/">RDSH, a deep neural network for pose estimation with DenseNet as the backbone; collaborative work with Federico Zocco</a></strong></td><td>1 / 3</td><td><code>ongoing</code></td></tr></tbody></table></div>]]>