The phenomenon of digital addiction is not new, yet its scale and impact are escalating rapidly. Smartphones, with their endless applications, instant connectivity, and the lure of social media, have become constant companions for many, offering both the illusion of connection and the reality of isolation. This paradox lies at the heart of the issue: the tool designed to connect us to the world also distances us from it. The psychological ramifications of this addiction are profound. From reduced attention spans and disrupted sleep patterns to heightened anxiety and depression, the effects are pervasive. The constant barrage of notifications and the compulsion to remain continually connected disrupt mental peace and personal relationships, leading to a cycle of dependency that is hard to break.

The proposal to establish rehabilitation centers for digital addiction might once have seemed far-fetched, yet it is becoming increasingly necessary. These facilities would not merely serve as a retreat from technology but as centers for relearning the art of living. Through counseling, digital detox programs, and the teaching of mindfulness and social skills, individuals can reclaim their autonomy over technology, rather than being ruled by it.

Addressing this pandemic requires a collective effort. It calls for awareness, education, and proactive measures from all sectors of society. Parents, educators, policymakers, and technology creators must work in tandem to create a balanced digital environment. Teaching digital literacy and fostering environments that encourage face-to-face interactions are crucial steps in this direction.

As we stand on the precipice of this digital pandemic, it is imperative to recognize and act upon the challenges it presents. The establishment of rehabilitation centers, while a necessary measure, is but a part of the solution. The ultimate goal should be to foster a society where technology serves to enhance human interactions, not replace them. By embedding ethical considerations in the design of technology and promoting a culture that values personal connections, we can mitigate the effects of digital addiction.

This impending pandemic of digital addiction is a clarion call to reassess our relationship with technology; a reminder that in our quest to connect digitally, we must not sever our ties with the very essence of human experience: real, tangible interactions. The time to act is now, lest we find ourselves ensnared in the very web we have woven.

In the ever-evolving landscape of computer vision, the transition from static imagery to the dynamic world of video marks a significant leap towards understanding the dynamism of our world. As someone who has spent years unravelling the mysteries hidden in static images using 2D Convolutional Neural Networks, I find myself at an exciting juncture in my PhD journey: diving into the spatio-temporal context. The shift from analyzing still frames to understanding the intricate sequences of video data is not just a step up in complexity, but a step into a realm brimming with untapped potential and unexplored challenges. My exploration of this domain is driven by a simple yet profound realization: our world is not static. It is a dynamic tapestry where each moment is a continuation of the last, a story unfolding in time.

In my previous work, 2D CNNs served as a powerful tool, adept at capturing spatial hierarchies and patterns within images, exploring the intricate relationships between pixels, and encoding subtle patterns via edges and corners. However, as I delve into video data, I find myself in need of a more sophisticated ally, one capable of understanding not just the spatial but also the temporal nuances of visual data. This is exactly where 3D Convolutional Neural Networks (3D CNNs) enter the picture.

My shift to 3D CNNs is more than just an academic interest; it is a journey towards a deeper understanding of how we can enable machines to perceive and interpret the world in its full dynamism, with all its stochasticity and uncertainty, even within seconds of action, much like we do. Every video clip is a symphony of motions, emotions, and interactions, with many layers of subtle meaning, and 3D CNNs promise to be the key to deciphering these complex sequences. As I embark on this journey, I aim not just to expand the boundaries of my own knowledge, but also to contribute to the broader field of computer vision, pushing towards systems that can understand and interact with the world in richer, more meaningful ways.

In subsequent blog posts, I invite you to join me in exploring 3D CNNs: from the core concepts that distinguish them from their 2D counterparts to the intricate challenges and learning curves I have encountered while applying them to video data. Whether you are a seasoned expert in the field, a beginner, a grad student, or a curious onlooker, I hope to offer insights and experiences that resonate with you.

**Understanding CNNs**: Convolutional Neural Networks (CNNs) have been the cornerstone of image analysis in computer vision for years. Traditional 2D CNNs are adept at processing static images, learning spatial hierarchies and patterns by applying filters that capture various aspects of the image, such as edges, textures, and shapes. If you would like to find out more about 2D CNNs, please refer to my slides and labs here.

**Limitation in Capturing Temporal Information**: While 2D CNNs excel in spatial understanding, they fall short in comprehending temporal dynamics, which is crucial when dealing with video data. Videos are essentially sequences of frames, where each frame is tied to its predecessor and successor, creating a temporal continuity that 2D CNNs cannot capture.

**Introduction to 3D CNNs**: This is where 3D Convolutional Neural Networks change the game. Unlike their 2D counterparts, 3D CNNs are designed to understand both spatial and temporal features. They achieve this by adding an additional dimension, time, to the convolutional process.

**How 3D CNNs Work**: In a 3D CNN, the convolutional filters extend along three dimensions: height, width, and depth (time). This allows the network to not only learn from the spatial content of each frame but also gain insights into the motion and changes occurring across frames. As a result, 3D CNNs can unravel the complex tapestry of actions and interactions in video sequences.
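To make the filter geometry concrete, here is a minimal NumPy sketch, with illustrative names and shapes that are my own rather than any framework's API, of a single 3D filter sliding over a short single-channel clip:

```python
import numpy as np

def conv3d_single(clip, kernel):
    """Valid 3D convolution of one single-channel clip with one filter.

    clip:   (T, H, W) array of frames
    kernel: (kt, kh, kw) filter spanning time, height, and width
    """
    T, H, W = clip.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Each output value mixes information across several frames,
                # not just within one frame -- this is the temporal part.
                out[t, i, j] = np.sum(clip[t:t + kt, i:i + kh, j:j + kw] * kernel)
    return out

clip = np.random.rand(8, 16, 16)   # 8 frames of 16x16 pixels
kernel = np.random.rand(3, 3, 3)   # a 3x3x3 spatio-temporal filter
features = conv3d_single(clip, kernel)
print(features.shape)              # (6, 14, 14): time shrinks too
```

In a real network a library layer such as PyTorch's `nn.Conv3d` does this (vectorised, with channels and learned kernels), but the arithmetic is the same: the output loses extent along time exactly as it does along height and width.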

**Beyond Static Frames**: The ability of 3D CNNs to interpret time makes them incredibly powerful for a range of applications. This includes action recognition in videos, where understanding the sequence of movements is key, and medical imaging, where temporal changes in 3D scans can indicate crucial health information. In each of these areas, 3D CNNs offer a more comprehensive understanding by considering the evolution of visual data over time.

**Challenges and Opportunities**: The shift to 3D CNNs, however, is not without its challenges. The addition of the temporal dimension increases the computational complexity significantly. Additionally, training 3D CNNs requires not only larger datasets but also datasets that accurately represent temporal variations.

**Initial Exploration**: My journey into 3D CNNs began as an extension of my work with 2D CNNs, where I had focused on spatial feature extraction from static images. The transition to 3D CNNs marked a significant shift towards integrating the temporal dimension. My initial challenge lay in comprehending the intricacies of 3D convolutional layers: understanding how they extend the spatial interpretation of 2D CNNs to include temporal relationships.

The architectural nuances of 3D CNNs, such as the incorporation of time as a third dimension in convolutional operations, presented both a conceptual and practical learning curve. This was not merely about adapting to a new technique but rethinking the approach to data representation and processing.

**Data Preprocessing and Management**: One of the most formidable challenges I faced was the preprocessing of video data. Unlike static images, video data comes with additional complexities like variable frame rates, diverse resolutions, and most crucially, a substantial increase in data volume. Developing an efficient preprocessing pipeline that could handle such diversity and volume was paramount. This involved not only frame extraction and resizing but also temporal sampling strategies to capture relevant motion information without overburdening the computational process.
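As an illustration of the temporal sampling strategies mentioned above, here is a simple uniform-sampling sketch; the function name and clip length are my own choices, not from any library:

```python
def sample_frame_indices(num_frames, clip_len):
    """Pick `clip_len` frame indices spread evenly across a video.

    A basic uniform temporal sampling strategy: it preserves coarse
    motion information while bounding the data fed to the network.
    """
    if num_frames <= clip_len:
        # Short video: repeat the last frame to pad the clip.
        return list(range(num_frames)) + [num_frames - 1] * (clip_len - num_frames)
    step = num_frames / clip_len
    return [int(step * i) for i in range(clip_len)]

# 16 indices spanning a 300-frame video, from frame 0 up to frame 281.
print(sample_frame_indices(300, 16))
```

Real pipelines layer more on top (random temporal jitter for augmentation, dense sampling for evaluation), but even this sketch shows the core trade-off: fewer sampled frames means less computation and coarser motion.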

**Architectural Design and Computational Considerations**: Designing the architecture of a 3D CNN requires a delicate balance. The model had to be sophisticated enough to capture intricate temporal patterns without becoming computationally infeasible. This entailed an iterative process of model design, where each layer's parameters were carefully calibrated to maximize learning while minimizing computational costs. The extended training durations and heightened resource demands of 3D CNNs necessitated a more strategic approach, leveraging distributed computing and optimizing algorithms for efficiency.

**Performance Optimization**: In pursuit of optimal performance, I explored a variety of architectural tweaks and parameter adjustments. Strategies such as modifying stride and kernel size in convolutional layers, and incorporating advanced techniques like transfer learning, played a crucial role in surmounting the limitations imposed by the sheer scale of video data.
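The effect of stride and kernel size on the output volume follows the standard convolution arithmetic, floor((n + 2p - k) / s) + 1 per dimension; a small helper (a hypothetical utility of my own, not a framework API) makes the trade-off easy to inspect:

```python
def conv3d_output_shape(in_shape, kernel, stride=(1, 1, 1), padding=(0, 0, 0)):
    """Output (T, H, W) of a 3D convolution, using the standard formula
    floor((n + 2p - k) / s) + 1 applied independently per dimension."""
    return tuple(
        (n + 2 * p - k) // s + 1
        for n, k, s, p in zip(in_shape, kernel, stride, padding)
    )

# Doubling the stride halves the resolution of the feature volume in
# every dimension -- including time:
print(conv3d_output_shape((16, 112, 112), (3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)))  # (16, 112, 112)
print(conv3d_output_shape((16, 112, 112), (3, 3, 3), stride=(2, 2, 2), padding=(1, 1, 1)))  # (8, 56, 56)
```

Checking shapes this way before training saved me from several designs whose memory footprint would never have fit on the GPU.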

**Combating Overfitting**: The increased parameter count in 3D CNNs heightened the risk of overfitting. To mitigate this, I implemented a combination of regularization strategies, data augmentation techniques, and dropout layers. These measures were critical in ensuring that the model generalized well, rather than merely memorizing the training data.

Working with 3D CNNs reinforced the virtue of patience. The field of 3D convolutional analysis is still burgeoning, with much left to explore and understand. Navigating this terrain often required an iterative, trial-and-error approach, underscoring the importance of resilience in research.

**Harnessing Technological Growth**: As computational capabilities continue to advance and datasets grow both in size and complexity, the potential applications of 3D CNNs are set to broaden significantly. I am particularly intrigued by the prospects in domains like augmented reality, where interpreting both spatial and temporal information is key to creating immersive experiences.

**Ongoing Exploration**: My foray into 3D CNNs is an ongoing chapter in my academic journey. I'm keen to delve deeper into novel architectures and apply these models across a wider spectrum of applications. The ultimate goal is to push the frontiers of computer vision and contribute to the development of systems that can interact with our dynamic world more intelligently and intuitively.

**Intuition, Mathematics, Code: A Technical Deep Dive into 3D CNNs**

In my upcoming blog post, we'll take a technical deep dive into the world of 3D Convolutional Neural Networks. I'll unravel the intuition behind these sophisticated models, illuminating how they interpret not just the visual cues in static images but also the temporal dynamics in videos. We'll delve into the mathematics that underpins these networks, demystifying how they learn and process information across both space and time. Expect to see detailed discussions on model architecture, accompanied by snippets of code that bring these concepts to life. Whether you're keen on understanding the nuts and bolts of 3D convolution operations or interested in the practical aspects of implementing these models in PyTorch, the next post promises to be a treasure trove of insights.

From discussing the nuances of kernel size and stride in 3D convolutions to exploring strategies for optimizing network performance, we will cover a spectrum of topics that will cater to both beginners and seasoned practitioners in the field. The goal is to provide you with a comprehensive understanding of 3D CNNs that balances theoretical depth with practical applicability. So, stay tuned for an enriching journey into the technical heart of 3D CNNs!

To take a sneak peek at an experimental 3D CNN architecture, please check here

In Part 1 of this series on camera calibration, we laid the groundwork by exploring the fundamental principles that govern how cameras translate the 3D world into a 2D image. We delved into camera models and the intrinsic and extrinsic parameters that play a vital role in this transformation. But that was merely scratching the surface.

In this second instalment, I'm going to broaden the scope significantly. We'll venture into the critical importance of camera calibration across various real-world applications: from robotics to autonomous vehicles and even the arts. We'll also uncover the lens distortions that could potentially mar your images and then look at the mathematical equations behind them.

So if you've ever wondered how self-driving cars make sense of their environment, how augmented reality applications manage to superimpose digital elements so naturally, or even questioned the mechanics behind your DSLR's crisp photos, you're in for a treat.

The importance of camera calibration extends far beyond academic interest; it plays a critical role in various real-world applications. In this section, I'll investigate why camera calibration is indispensable in several key areas.

In robotics, precision is not just a nice-to-have; it is the name of the game. Whether they work on bustling factory floors or help people in their homes, these machines have to 'know' what is around them and where exactly it is located. This is even more true for robots rocking machine perception tech, which allows them to interpret and make sense of their surroundings. Getting the camera calibration right in settings like these is often a big deal.

Take a factory assembly line, for example. Robots are often kitted out with cameras and machine perception algorithms to identify parts or objects. Mess up the camera calibration, and you're in for a world of trouble. Imagine a robot misjudging the position of a piece it's supposed to pick. That's the kind of error that can start a chain reaction of problems. This is not just about assembly lines or specific tasks, either. Suppose a robot is to pick up an item and place it somewhere specific: a well-calibrated camera ensures that the robot's actions are spot-on with what it is seeing. This extends beyond the task at hand to the robot's ability to navigate more complex situations. Think about it: a finely calibrated camera can act like a robot's "sixth sense", allowing for on-the-fly adjustments during the job.

To sum it up, nailing camera calibration in robotics and automation isn't just a good practice; it is a must. Whether for aiding complex tasks or helping a robot safely navigate an unstructured environment, getting the camera settings right can either make or break the whole operation.

We are on the brink of a game-changer: self-driving cars are about to become a common sight on our roads. But let us not forget, the tech making this possible is anything but simple. At the core, we have advanced vision systems that let these vehicles 'see' the world around them. However, seeing is not always enough; these systems must also be spot-on when interpreting this visual data for real-time decision-making. This is precisely where camera calibration comes in and becomes a critical piece of the puzzle.

For a minute, think about the challenges of driving autonomously. Cars must navigate a world filled with other vehicles, pedestrians, and other unpredictable elements. Oh, get the camera calibration wrong, and you are asking for trouble. Results of miscalibration? We are discussing potentially misjudging the distance to the car in front, which could translate to insufficient time to brake or even a full-on collision.

Here is the kicker: autonomous cars rely on many machine vision tasks, such as detecting obstacles, understanding road signs, or even interpreting road markings. Many of these cars require more than one camera, each serving a specific purpose. Hence, calibrating each camera is not a one-off job; it is about ensuring all these cameras work harmoniously.

Okay, let's talk AR and VR. These are realms where the line between the digital and the real world gets blurry. Whether overlaying virtual furniture in your real living room or immersing yourself in a completely digital world, the experience has to feel real. That's why camera calibration is a big deal in AR and VR tech.

Think about it. You put on a VR headset and step into a virtual world. You move your head, and the perspective changes perfectly in sync. That's not magic; it's precise calibration. If the camera's off even by a little, you might start to feel motion sickness or have a subpar experience. That's the last thing you want when battling space pirates or exploring a virtual museum.

Now, switch gears to AR. Imagine using an app on your smartphone to visualize how a new sofa would look in your living room. The app has to blend digital objects with the real world smoothly. If the camera calibration is off, that sofa might look like it's floating in mid-air or sinking into the floor. Not the best way to make a buying decision, right?

And let's not forget about more advanced applications. For example, getting the camera calibration wrong could be a matter of life and death in medical AR. Surgeons often use AR tech for guided procedures. In scenarios like this, the calibration needs to be absolutely spot-on for accurate guidance and successful outcomes.

So, all in all, whether you're gaming, shopping, or even performing surgery, camera calibration in AR and VR isn't just about enhancing the experience; it's about making it possible in the first place.

Let's get into film and photography, where camera calibration isn't just about the tech; it's also about the art. In settings that demand a heavy dose of scientific rigor, like wildlife documentaries or high-speed sports action, getting your camera settings right is non-negotiable. Picture this: you're shooting a documentary on migratory birds. A well-calibrated camera lets you capture beautiful shots and accurate data on how fast and high these birds fly. That's adding a layer of scientific credibility to your storytelling.

But hey, it's not all about the numbers. Camera calibration also plays a starring role in the artistic side of things. Take landscape photography, for instance. You want those mountain ranges and valleys to look as majestic in the photo as they do in real life. A calibrated camera ensures that the proportions and spatial relationships within the frame are just right, enhancing your shots' emotional impact and narrative quality.

And let's not forget the controlled chaos of a studio setting. Calibration is your best friend, whether you're doing product photography, snapping high-fashion looks, or capturing fine art reproductions. In essence, camera calibration in film and photography is more than a behind-the-scenes technicality; it's a linchpin that can elevate your work from good to great. It's not just about getting the colour balance or the focus right; it's about capturing the subject's soul, be it a fast-paced sporting event or a still life. When your camera is finely tuned, your work speaks volumes, conveying scientific facts or evoking deep emotions.

Regarding camera calibration, addressing distortions is not just a side quest - it is the main objective. Distortions are discrepancies between the captured image and the real-world scene, affecting the accuracy of the camera's representations. Distortions can be characterised as deviations from the ideal imaging model, where rays from a single point in three-dimensional space converge at a single point on the imaging sensor. Numerous factors contribute to distortions, including lens shape, refractive index variations, and manufacturing imperfections. These distortions have various types, each with a mathematical model and correction method. Understanding these distortions is pivotal for calibrating the camera to achieve high accuracy in multiple applications.

**1. Barrel Distortion:** Barrel distortion is a sub-type of radial distortion in which image magnification decreases with distance from the optical axis. As a result, straight lines appear to bow outwards from the centre of the image, like the sides of a barrel. This type of distortion is common in wide-angle lenses.

**2. Pincushion Distortion:** Conversely, image magnification increases with distance from the optical axis in pincushion distortion. The result is that straight lines that do not pass through the centre appear to bow inwards, towards the centre, akin to the pinched sides of a pincushion.

Mathematically, a unified model can represent both barrel and pincushion distortions, often employing higher-order polynomials, which is particularly useful when working with more complicated lens systems. The general formula is:

$$\begin{align*} x' &= x\left(1 + k_1 r^2 + k_2 r^4 + \ldots\right) \\ y' &= y\left(1 + k_1 r^2 + k_2 r^4 + \ldots\right) \end{align*}$$

Here, (**x**, **y**) are the ideal (undistorted) image coordinates, (**x'**, **y'**) are the distorted coordinates, *k₁*, *k₂*, … are the radial distortion coefficients, and **r** is the radial distance from the optical axis:

$$r = \sqrt{x^2 + y^2}$$

In this general model:

- A positive *k₁* produces pincushion distortion.
- A negative *k₁* produces barrel distortion.
- Higher-order terms like *k₂* allow for more complex distortion patterns, which might be observed in higher-end or more flawed lens systems.

The model is extendable to as many terms as necessary, but in practice, most systems are sufficiently modelled using just *k₁* and sometimes *k₂*.
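As a sketch of how this polynomial model behaves in code, here is an illustrative function of my own, truncated at *k₂*:

```python
def apply_radial_distortion(x, y, k1, k2=0.0):
    """Map ideal image coordinates to radially distorted ones using
    x' = x (1 + k1 r^2 + k2 r^4),  y' = y (1 + k1 r^2 + k2 r^4)."""
    r2 = x**2 + y**2
    factor = 1 + k1 * r2 + k2 * r2**2
    return x * factor, y * factor

# A point away from the optical axis: a positive k1 pushes it further
# outward (pincushion), a negative k1 pulls it inward (barrel).
print(apply_radial_distortion(0.5, 0.5, k1=0.1))
print(apply_radial_distortion(0.5, 0.5, k1=-0.1))
```

Note the convention: the code maps ideal to distorted coordinates; correcting an image amounts to inverting this mapping, which calibration toolkits do numerically.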

**3. Tangential Distortion:** These distortions occur when the lens and the imaging plane are not perfectly parallel. While radial distortions displace image points radially outward from the centre, tangential distortions act orthogonally to them, shifting points horizontally and vertically in a way unrelated to their distance from the optical axis. The result is an image that can appear tilted or skewed.

Mathematically, tangential distortion can be expressed as:

$$\begin{align*} x' &= x + \left(2p_1 xy + p_2 (r^2 + 2x^2)\right) \\ y' &= y + \left(p_1 (r^2 + 2y^2) + 2p_2 xy\right) \end{align*}$$

Here, **x'** and **y'** are the distorted coordinates, **x** and **y** are the ideal coordinates, and **r** is the radial distance defined earlier.

The coefficients *p₁* and *p₂* characterise the tangential distortion; they are estimated, alongside the radial coefficients, during calibration.
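The tangential terms can be sketched the same way; again, this is an illustrative function rather than a library call:

```python
def apply_tangential_distortion(x, y, p1, p2):
    """Shift ideal coordinates by the tangential terms
    x' = x + (2 p1 x y + p2 (r^2 + 2 x^2)),
    y' = y + (p1 (r^2 + 2 y^2) + 2 p2 x y)."""
    r2 = x**2 + y**2
    x_d = x + (2 * p1 * x * y + p2 * (r2 + 2 * x**2))
    y_d = y + (p1 * (r2 + 2 * y**2) + 2 * p2 * x * y)
    return x_d, y_d

# With p1 = p2 = 0 the point is unchanged; nonzero coefficients shift it
# in a direction orthogonal to the purely radial displacement.
print(apply_tangential_distortion(0.5, 0.5, p1=0.01, p2=0.0))
```

In practice, calibration tools such as OpenCV bundle *k₁*, *k₂*, *p₁*, *p₂* (and sometimes higher-order terms) into a single distortion-coefficient vector estimated jointly.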

In this second instalment, we've delved deeper into the reasons for camera calibration across various industries, touched on different types of distortions, and hinted at the mathematics involved. However, we've only scratched the surface. In Part 3, we'll dive into the heart of the mathematics that makes accurate camera calibration possible. From optimization problems to factoring in distortions, we'll explore how all these elements combine to create a robust camera model. Stay tuned!

For those looking to delve deeper into the topics covered in this blog post, the following resources are highly recommended:

[1] Codes for distortion plots

[2] Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman

Sharing educational resources has always been a way to democratise knowledge. Whether you're a student who took the course and wants to revisit the material or someone who's just getting started with Python, these resources will serve as a comprehensive guide.

To give you a taste of what's to come, week one was all about laying the foundation:

Translators, Compilers, and Assemblers: An overview of the tools that make coding in Python possible and how they work.

Each week, I'll post the slides corresponding to that week's topics. Occasionally, I will also share the recorded lectures for those who prefer a more interactive learning experience.

Whether you're a beginner in Python or looking to refresh your knowledge, stay tuned for weekly updates that will take you from the basics to more advanced topics. Don't forget to check back each week for new materials, and happy learning!

Imagine you're taking a photo of a building with your smartphone. You might notice that the lines of the building don't appear as straight as they do in real life, or the proportions seem slightly off. These are distortions: discrepancies between real-world objects and their representations in the image. Such distortions often occur due to the inherent limitations of camera lenses and sensors as they attempt to map a 3D world onto a 2D plane.

Camera calibration is the technique used to understand and correct these distortions. It's a fundamental process for achieving more accurate visual representations, especially in applications like augmented reality, robotics, and 3D reconstruction. In this first part of our series on camera calibration, we'll explore the foundational concepts and models that serve as the backbone of this technique. We'll delve into the intrinsic and extrinsic parameters that influence how a camera captures an image and discuss how these parameters can be determined to correct distortions. By the end of this post, you'll have a solid understanding of the principles behind camera calibration and its importance in various domains.

To understand the intricacies of camera imaging, it's useful to connect the dots with real-world applications. Take the example of a self-driving car, which relies on its camera to accurately gauge the dimensions and distances of surrounding elements like pedestrians, other vehicles, and road signs. Just as understanding the human eye's perception aids in comprehending our interaction with the 3D world, grasping the mechanics of a camera model enhances the precision of such measurements in automated systems. To unpack this further, let's engage in a thought experiment: envision a simple setup (See figure 1) where a small barrier with a pinhole is placed between a 3D object and a film. Light rays from the object pass through the pinhole to create an image on the film. This basic mechanism serves as the cornerstone for what is known as the pinhole camera model, a foundational concept that allows us to fine-tune the way cameras, like the one in a self-driving car, interpret the world.

In the pinhole model, consider a 3D coordinate system defined by unit vectors **i**, **j**, and **k**, with its origin at the pinhole. A point **P** = (X, Y, Z) in this system projects through the pinhole to a point **p** = (x, y) on the image plane.

To relate **P** and **p**, we can use similar triangles:

$$\frac{x}{f} = \frac{X}{Z} \quad \text{and} \quad \frac{y}{f} = \frac{Y}{Z}$$

Solving for **x** and **y**, we obtain:

$$\begin{align*} x &= f \left( \frac{X}{Z} \right), \\ y &= f \left( \frac{Y}{Z} \right). \end{align*}$$

Here, *f* represents the focal length of the camera.
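The projection equations above translate directly into code; here is a tiny sketch (the function name is my own):

```python
def project_pinhole(point_3d, f):
    """Project a 3D point (X, Y, Z) onto the image plane of an ideal
    pinhole camera with focal length f: x = f X / Z, y = f Y / Z."""
    X, Y, Z = point_3d
    if Z <= 0:
        raise ValueError("Point must be in front of the camera (Z > 0)")
    return f * X / Z, f * Y / Z

# Perspective in one line: a point twice as far away projects half as large.
print(project_pinhole((1.0, 2.0, 4.0), f=2.0))  # (0.5, 1.0)
print(project_pinhole((1.0, 2.0, 8.0), f=2.0))  # (0.25, 0.5)
```

The division by *Z* is the whole of perspective projection; everything that follows in this series (distortions, intrinsics, extrinsics) refines what happens before and after that divide.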

While the pinhole model gives us an idealized perspective of image formation, real-world cameras use lenses to focus light. Lenses introduce additional complexities due to their shape, material, and how they bend light rays. These models account for additional factors like focal length, aperture, and lens distortions. Let's explore lens models to understand these intricacies.

Like the pinhole model, lens models use a 3D coordinate system defined by **i**, **j**, and **k**.

In lens models, we need to account for distortions introduced by the lens. These distortions are typically represented by *d_x* and *d_y*, offsets added to the ideal projection:

$$\begin{align*} x &= f \left( \frac{X}{Z} \right) + d_x, \\ y &= f \left( \frac{Y}{Z} \right) + d_y. \end{align*}$$

In these equations, *d_x* and *d_y* are functions of **X**, **Y**, and **Z**, as well as of the lens characteristics.

So far, we've discussed the basic models that describe how cameras work and how they capture the 3D world onto a 2D plane. These models give us a high-level view but are generalized and often idealized. In practice, each camera has its unique characteristics that influence how it captures an image. These characteristics are captured by what are known as **intrinsic** and **extrinsic** parameters. While intrinsic parameters deal with the camera's own 'personality' or 'DNA', extrinsic parameters describe how the camera is positioned in space. Together, they offer a complete picture of a camera's behaviour, which is crucial for applications like 3D reconstruction, augmented reality, and robotics.

After understanding the broad overview of intrinsic and extrinsic parameters, let's zoom in on the intrinsic parameters first. These parameters are unique to each camera and provide insights into how it captures images. While these parameters are generally considered constants for a specific camera, it is important to note that they can sometimes change. For instance, in cameras with variable focal lengths or adjustable sensors, intrinsic parameters can vary.

#### Optical Axis

The optical axis is essentially the line along which light travels into the camera to hit the sensor. In the idealized pinhole and lens models, it's the line that passes through the aperture (or lens centre) and intersects the image plane. It serves as a reference line for other measurements and parameters.

**Focal Length**(*f*): This is the distance between the lens and the image sensor. Knowing the focal length is crucial for estimating the distances and sizes of objects in images. It's also a key factor in determining the field of view and is usually represented in pixels.

$$f = \alpha \times \text{sensor size} ,$$

Here, *α* is a constant that relates the physical sensor size to the size in pixels.

**Principal Point** (*c_x*, *c_y*): This is the point on the image plane where the optical axis intersects; it often lies near the centre of the image. It is crucial for tasks like image alignment and panorama stitching.

$$\begin{align*} c_x &= \frac{Image Width}{2},\\ \\ c_y &= \frac{Image Height}{2}. \end{align*}$$

**Skew Coefficient****(s)**: This parameter is responsible for any angle between the x and y pixel axes of the image plane. It is rarely encountered in modern-day cameras.

$$s = 0 \quad \text{(usually)}$$

The intrinsic matrix, denoted by **K**, consolidates these parameters:

$$K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$$
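Assembling **K** and using it to map a point on the normalized image plane to pixel coordinates can be sketched as follows (the values are illustrative, not from a real calibration):

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy, s=0.0):
    """Assemble the intrinsic matrix K from focal lengths (in pixels),
    the principal point, and an optional skew coefficient."""
    return np.array([[fx,  s,  cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# Map a normalized camera-plane point (x, y, 1) to pixel coordinates.
K = intrinsic_matrix(fx=800.0, fy=800.0, cx=320.0, cy=240.0)
x_norm = np.array([0.1, -0.05, 1.0])
u, v, w = K @ x_norm
print(u / w, v / w)  # pixel coordinates of the projected point
```

Reading it off: the focal lengths scale the normalized coordinates into pixels, and the principal point shifts the origin from the optical axis to the image corner.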

Although intrinsic parameters like the focal length and principal point are often treated as constants, especially in fixed or pre-calibrated camera setups, they can often change based on specific hardware configurations. For example, the focal length will vary in cameras with zoom capabilities. Therefore, in such special cases, recalibration may be necessary.

Cameras with zoom capabilities introduce an additional layer of complexity to the calibration process. While zooming allows for better framing or focus on specific areas, it also changes intrinsic parameters like the focal length. This section will explore how to handle calibration in scenarios involving zoom.

*Calibration at Specific Zoom Levels*

When you calibrate a camera at a particular zoom level, the resulting intrinsic parameters are only accurate for that setting. If you continue to record or capture images at the same zoom level, these calibration parameters will remain valid.

$$K_{\text{zoom}} = \begin{pmatrix} f_{\text{zoom}} & s & c_x \\ 0 & f_{\text{zoom}} & c_y \\ 0 & 0 & 1 \end{pmatrix}$$

Here, **K_zoom** denotes the intrinsic matrix calibrated at that zoom level, and **f_zoom** the corresponding focal length.

If you adjust the zoom after calibration, you have two main options:

- **Dynamic Calibration**: Recalibrate the camera every time you change the zoom. This approach provides the highest accuracy but may be impractical for real-time applications due to computational costs.
- **Parameter Interpolation**: If you've calibrated the camera at multiple zoom levels, you can interpolate the intrinsic parameters for new zoom settings. This is computationally efficient but might sacrifice some accuracy.
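A minimal sketch of the interpolation option, assuming simple linear interpolation of the focal length between calibrated zoom levels (a simplification; real lenses can vary nonlinearly, so treat this as a starting point):

```python
def interpolate_focal_length(zoom, calibrated):
    """Estimate the focal length at an uncalibrated zoom setting from
    (zoom_level, focal_length) pairs measured during calibration.

    Clamps outside the calibrated range; linearly interpolates within it.
    """
    pts = sorted(calibrated)
    if zoom <= pts[0][0]:
        return pts[0][1]
    if zoom >= pts[-1][0]:
        return pts[-1][1]
    for (z0, f0), (z1, f1) in zip(pts, pts[1:]):
        if z0 <= zoom <= z1:
            t = (zoom - z0) / (z1 - z0)
            return f0 + t * (f1 - f0)

# Calibrated at 1x and 4x zoom; estimate the focal length at 2x.
print(interpolate_focal_length(2.0, [(1.0, 800.0), (4.0, 2400.0)]))
```

The same scheme extends to the principal point if it also drifts with zoom; in practice you would validate the interpolated parameters against a held-out calibration at an intermediate zoom level.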

Understanding intrinsic parameters is key for various computer vision tasks. For instance, in augmented reality, an accurate intrinsic matrix can drastically improve the realism and alignment of virtual objects in real-world scenes.

While intrinsic parameters define a camera's 'personality' by capturing its internal characteristics, extrinsic parameters tell the 'story' of the camera's interaction with the external world. These parameters, specifically the rotation matrix **R** and the translation vector **T**, describe the camera's orientation and position relative to the world coordinate system.

**Rotation Matrix (R):** This *3x3* matrix gives us the orientation of the camera in the world coordinate system. Specifically, it transforms coordinates from the world frame to the camera frame. For instance, if a drone equipped with a camera needs to align itself to capture a specific scene, the rotation matrix helps in determining the orientation the drone must assume. The rotation matrix is usually denoted as *R* and takes the form:

$$R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}$$

The elements **r_{11}** through **r_{33}** encode how the world coordinate axes map onto the camera's axes.

**Translation Vector (T):** This *3x1* vector represents the position of the camera's optical centre in the world coordinate system. The translation vector is generally represented as:

$$T = \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix}$$

The elements **t_{x}**, **t_{y}**, and **t_{z}** give the camera centre's displacement along each world axis.

Computing R and T gives you a complete picture of the camera's pose in the world, including both orientation and position.

Together, the rotation matrix and the translation vector can be combined into a single **3x4** matrix, often represented as **[R|T]** and called the extrinsic matrix:

$$[R|T] = \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{pmatrix}$$
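Putting the intrinsics and extrinsics together, the full projection of a world point is P = K [R|T]. The sketch below composes this matrix and projects one homogeneous world point; all numeric values are illustrative, not from a real calibration:

```python
import numpy as np

# Hypothetical intrinsics
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Extrinsics: identity rotation, camera translated 2 units along the z-axis
R = np.eye(3)
T = np.array([[0.0], [0.0], [2.0]])
RT = np.hstack([R, T])   # 3x4 extrinsic matrix [R|T]

P = K @ RT               # 3x4 projection matrix

# Project a world point given in homogeneous coordinates (X, Y, Z, 1)
X_world = np.array([0.1, -0.05, 0.0, 1.0])
uvw = P @ X_world
u, v = uvw[:2] / uvw[2]  # perspective divide
print(u, v)
```

With the identity rotation, [R|T] simply shifts the world point into the camera frame before K maps it to pixels; a nontrivial R would rotate the world axes into the camera's axes first.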

We've covered a lot of ground in this first instalment of our series on camera calibration, unravelling the complexities behind camera models and the intrinsic and extrinsic parameters that define them. These foundational concepts are the building blocks for more advanced topics like distortion correction, 3D reconstruction, and multi-camera setups. In the next part of this series, we'll go beyond the basics to explore the practical reasons for camera calibration, the types of distortions you might encounter, and the mathematical and technical approaches to correct them. So, stay tuned for more insights into the fascinating world of camera calibration!

For those looking to delve deeper into the topics covered in this blog post, the following resources are highly recommended:

**Course Notes:**

Stanford CS231A: Camera Models

**Books:**

Digital Image Warping by George Wolberg

Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman

Computer Vision: Algorithms and Applications by Richard Szeliski

3D Computer Vision: Efficient Methods and Applications by Christian Wöhler

**Papers:**

A Four-step Camera Calibration Procedure with Implicit Image Correction by Janne Heikkilä and Olli Silvén

Flexible Camera Calibration By Viewing a Plane From Unknown Orientations by Zhengyou Zhang

By exploring these resources, you'll gain a more comprehensive understanding of camera calibration, enabling you to tackle more complex problems and applications.

The song "IBA" by Pastor Nathaniel Bassey, Dunsin Oyekan, and Dasola Akinbule is deeply inspiring, and yet again I am brought to the realisation of how glorious you are, Lord. You are graciously wonderful; there is not a thought on earth nor in heaven that can fathom how great you are. No thought, whether terrestrial or celestial, can adequately encapsulate your grandeur. In the face of this, I find myself humbled, whispering "IBA, atofarati" in reverent submission.

Musical compositions like "IBA" possess a remarkable ability to crystallize complex emotions and thoughts. They serve both as an articulation of, and a channel for, our ineffable sense of awe and reverence.

I recognise your unsearchable greatness, Ahayah!

Imagine you're a wine connoisseur with a penchant for data. You've collected a vast dataset that includes variables like acidity, sugar content, and alcohol level for hundreds of wine samples. You're interested in distinguishing wines based on these characteristics, but you soon realize that visualizing and analyzing multi-dimensional data is like trying to taste a wine from a sealed bottle: near impossible.

This is where the magic of Principal Component Analysis, or PCA for short, kicks in. Think of PCA as your data's personal stylist, helping your dataset shed unnecessary dimensions while keeping its essence intact. Whether you're dissecting the nuances of wine characteristics or diving into the depths of machine learning algorithms, PCA is your go-to for simplifying things without losing the crux of the data.

Let's assume you are given a 2D dataset **X** of size *n x 2*, where each row is a 2D point. The first step is to compute the mean **μ** of the rows and the covariance matrix:

$$\Sigma = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T$$

Here x_i represents the i^{th} row in **X** (a 2D point), and **μ** is the mean of all the rows.

After calculating **Σ**, we perform an eigendecomposition of the covariance matrix to find its eigenvalues and eigenvectors. The eigendecomposition of **Σ** can be represented as:

$$\Sigma = Q \Lambda Q^{-1}$$

Here **Q** is a matrix where each column is an eigenvector of **Σ**, and **Λ** is a diagonal matrix whose entries are the corresponding eigenvalues.

Let's say you have now found k eigenvectors (principal components) that you would like to use for dimensionality reduction. These **k** eigenvectors form a *d x k* projection matrix **P** (here *d = 2*).

The projected data **Y**, in the new *k*-dimensional space, is obtained by projecting **X** onto **P**:

$$Y = X \cdot P$$

In this equation, **X** is the original (centered) *n x d* data matrix and **P** is the *d x k* matrix of selected eigenvectors, so **Y** has size *n x k*.

Here's a Python code snippet to get you started with PCA:

**Data Generation:** First, let's generate some synthetic data with 100 samples in a 2D feature space, where the y-coordinates are correlated with the x-coordinates. This mimics real-world data, where features are often correlated.

```python
import numpy as np
import matplotlib.pyplot as plt

# Generate synthetic 2D data
np.random.seed(0)
x = np.random.normal(0, 10, 100)            # x-coordinates
y = 2 * x + np.random.normal(0, 5, 100)     # y-coordinates
data = np.column_stack((x, y))
```

**Data Visualisation:**Let's visualise what our generated data looks like.

```python
# Plot the synthetic data
plt.figure(figsize=(8, 6))
plt.scatter(data[:, 0], data[:, 1], label='Original Data')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Synthetic 2D Data')
plt.grid(True)
plt.legend()
plt.show()
```

**Data Centering:** Before you can apply PCA, it is essential to center the data around the origin. This ensures that the first principal component describes the direction of maximum variance.

```python
# Calculate the mean of the data
mean_data = np.mean(data, axis=0)

# Center the data by subtracting the mean
centered_data = data - mean_data
```

**Covariance Matrix Calculation:** The covariance matrix captures the internal structure of the data. It is the basis for identifying the principal components.

```python
# Calculate the covariance matrix
cov_matrix = np.cov(centered_data, rowvar=False)
print('Covariance of Data', cov_matrix)
```

$$\text{Covariance Matrix} = \begin{pmatrix} 102.61 & 211.10 \\ 211.10 & 461.01 \end{pmatrix}$$

Code output:

```
Covariance of Data [[102.60874942 211.10203024]
 [211.10203024 461.00685553]]
```

**Eigen Decomposition:**Here we calculate the eigenvalues and eigenvectors of the covariance matrix. The eigenvectors point in the direction of maximum variance, and the eigenvalues indicate the magnitude of this variance - since the first principal component is the eigenvector associated with the largest eigenvalue of the data's covariance matrix. This eigenvector identifies the direction along which the dataset varies the most.

```python
# Calculate the eigenvalues and eigenvectors of the covariance matrix
eig_values, eig_vectors = np.linalg.eig(cov_matrix)
print('Eigenvalues:', eig_values, '\n', 'Eigenvectors: ', eig_vectors)
```

$$\text{Eigenvalues} = \left[ 4.90, 558.71 \right]$$

$$\text{Eigenvectors} = \begin{pmatrix} -0.91 & -0.42 \\ 0.42 & -0.91 \end{pmatrix}$$

**Projection and Visualization:**Our data is then projected onto the principal component. The original data with the principal component, and the projected data are then plotted together to further emphasize the dimensionality reduction.

```python
# Choose the eigenvector corresponding to the largest eigenvalue (principal component)
principal_component = eig_vectors[:, np.argmax(eig_values)]

# Project data onto the principal component
projected_data = np.dot(centered_data, principal_component)

# Plot the original data, the principal component, and the projection
plt.figure(figsize=(10, 8))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, label='Original Data')

# Draw the principal component as a red arrow
plt.arrow(mean_data[0], mean_data[1],
          principal_component[0] * 20, principal_component[1] * 20,
          head_width=2, head_length=2, fc='r', ec='r', label='Principal Component')

# Plot the projected data as green points
plt.scatter(mean_data[0] + projected_data * principal_component[0],
            mean_data[1] + projected_data * principal_component[1],
            alpha=0.5, color='g', label='Projected Data')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Data and Principal Component')
plt.grid(True)
plt.legend()
plt.show()
```

Output:

There we go: the red arrow representing the principal component is now visible in the plot, along with the original data points and their projections (in green). The arrow points in the direction of the highest variance in the dataset, capturing the essence of the data in fewer dimensions.

You might have noticed that the red arrow, our principal component, points towards the bottom left. Is this supposed to happen? Absolutely, and here is why:

The direction of the principal component is calculated mathematically to capture the maximum variance in the synthetic dataset. This direction is defined by the eigenvector corresponding to the largest eigenvalue of the covariance matrix.

Simply put, the principal component serves as a "line of best fit" for the multidimensional data. It doesn't necessarily align with the `x` and `y` axes, but it captures the correlation between these dimensions. In this specific synthetic dataset, the principal component points towards the bottom left, indicating that as one variable decreases, the other tends to decrease as well, and vice versa.

This is a crucial insight because it tells us not just about the spread of each variable but also about their relationship with each other. So, yes, the direction of the principal component is both intentional and informative.
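As a sanity check on the manual walkthrough above, the same reduction can be reproduced with scikit-learn's `PCA` (assuming scikit-learn is installed); the data generation mirrors the synthetic setup used earlier:

```python
import numpy as np
from sklearn.decomposition import PCA

# Same synthetic data as in the walkthrough above
np.random.seed(0)
x = np.random.normal(0, 10, 100)
y = 2 * x + np.random.normal(0, 5, 100)
data = np.column_stack((x, y))

# Fit PCA and keep a single component
pca = PCA(n_components=1)
reduced = pca.fit_transform(data)   # shape (100, 1): the projected data

# Fraction of the total variance captured by the first component
print(pca.explained_variance_ratio_)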


Let's circle back to our wine example. You could use PCA to distinguish wines based on key characteristics. By reducing the dimensions, you can visualize clusters of similar wines and maybe even discover the perfect bottle for your next dinner party!

PCA shows up across many domains:

- **Data Visualization**: High-dimensional biological data, stock market trends, etc.
- **Noise Reduction**: Image processing and audio signal processing.
- **Natural Language Processing**: Feature extraction from text data.

Once you're comfortable with the basics, there are more advanced variants to explore:

- **Kernel PCA**: For when linear PCA isn't enough.
- **Sparse PCA**: When you need a sparse representation.
- **Integrating with Deep Learning**: Using PCA for better initialization of neural networks.
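As one concrete taste of these extensions, here is a minimal Kernel PCA sketch using scikit-learn (assuming it's installed); the dataset and hyperparameters are illustrative choices, not tuned values:

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

# Two concentric circles: the classes are not linearly separable,
# so linear PCA cannot untangle them
X, labels = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# An RBF kernel implicitly maps the data into a space where the
# circles become separable along the leading components
kpca = KernelPCA(n_components=2, kernel='rbf', gamma=10)
X_kpca = kpca.fit_transform(X)
print(X_kpca.shape)
```

The `gamma` parameter controls the width of the RBF kernel; in practice it is chosen by cross-validation rather than by hand.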

For those who wish to delve deeper into PCA, here are some textbook references:

"Pattern Recognition and Machine Learning" by Christopher M. Bishop

"The Elements of Statistical Learning" by Trevor Hastie, Robert Tibshirani, and Jerome Friedman

"Machine Learning: A Probabilistic Perspective" by Kevin P. Murphy

In the realm of data science, PCA ages like a well-kept Bordeaux: it only gets richer and more valuable as you delve deeper. This versatile approach is more than just a mathematical trick; it's a lens that brings clarity to your analytical endeavors. So whether you're a wine lover seeking the perfect blend, a data scientist sifting through gigabytes, or a machine learning guru, mastering PCA is like adding a Swiss Army knife to your data analysis toolkit.

This verse of the scripture reminds me of the inherent grace and beauty found in our relationship with the Creator. The imagery of being crowned amidst majestic mountains and vast seas brings forth feelings of awe, wonder, and profound gratitude. We are a unique creation, positioned just a little lower than the heavenly beings, yet bestowed with honour and glory. I'm thankful for this reminder of our divine connection, where the natural world stands as a testament to the Creator's grand design.

The mountains and seas serve not merely as a scenic backdrop but as a profound metaphor for our existence, filled with purpose and meaning. They inspire a sense of humility, yet simultaneously elevate our understanding of our special place in this magnificent creation. I am deeply grateful for the realization that I am part of this intricate and purposeful design, created with intention and love. It's a thought that fills me with thankfulness and inspires me to live a life that reflects this connection. How majestic indeed is His name in all the earth, and how profound is our connection to it all!

Thank you Ahayah!

The canvas of my early years was set in the culturally diverse landscape of Nigeria. The socio-economic environment of my childhood was marked by austerity and a sense of frugality. My parents, resolute in their vision for their child, were fervent believers in the transformative power of education. They nurtured within me a dream that outstretched the limited purview of our financial means. Years of relentless sacrifice and unwavering determination led my parents to provide me with an invaluable opportunity - a quality education. This served as an egalitarian launchpad where I, along with my peers from various economic strata, could test our mettle. Our schools, the magnificent fortresses of knowledge, emerged as arenas where economic disparity was beautifully blurred into oblivion. We all found ourselves on the same starting line, gearing up for the race of life.

Now, for the biting twist of irony. While we were all equipped with the same opportunity, the outcome was a completely different story. Consider this: We are all given a violin and a piece of music. While the violin and music are the same, the symphony that each individual produces varies drastically. Some, with practice, could create a melody that moves hearts; others might only manage a cacophony. What a splendid testimony to the sardonic wit of life's realities!

With this crystal-clear reality, I found myself standing at life's crossroads. I had two options to channel my energy into unyielding hard work and diligence, crafting a narrative of success, or to let my circumstances dictate my future, whiling away my life on the sidelines. The melody of meritocracy resonated with me. It echoed the profound truth that the world values individuals not by their ancestral wealth but by the strength of their efforts.

At this juncture, allow me to invite sarcasm back onto our stage. Picture a world where, regardless of effort, skill, or prowess, everyone reaches the finish line simultaneously. Consider an academic setting where the diligent scholar and the habitual procrastinator are both rewarded with the same grade. They call it equality; I call it a comedic tragedy!

From a little child to a person carving out their destiny, my narrative was an intense adventure. Given an extraordinary opportunity, I could have taken any path. But I chose the road less travelled. I decided to rise above my circumstances and use my opportunity to craft a trajectory of success. This narrative was never about equal outcomes, but about a race where the winner wasn't preordained. The medals weren't bestowed freely; they were meticulously earned, each gleaming symbol a testament to the sweat of hard work and the unfaltering spirit of meritocracy.

As my journey progressed, I found myself traversing a path littered with challenges and obstacles. Each hurdle, however, was an opportunity in disguise, a chance to prove my worth, test my resolve, and learn valuable lessons. Hours transformed into days, and days into years as I relentlessly pursued excellence, often at the expense of social gatherings and leisure. The culmination of these years of toil and perseverance resulted in a journey defined by meritocratic success.

Taking a step back, the larger narrative unveils itself, posing a series of philosophical questions. What is the true essence of equality in our society? Is it merely about presenting equal opportunities, or does it extend to ensuring equal outcomes? In our quest for equality, where do we demarcate the boundary between rewarding merit and fostering mediocrity?

To answer these questions, we revisit the essence of 'About Equality and Equity.' The narrative of my life echoes the sentiment that creating equal opportunities forms the cornerstone of a just society. The outcomes, however, shouldn't be identical trophies, but a reflection of our individual efforts, our steadfast determination, and our merits.

As we conclude this philosophical exploration into the realms of equality, equity, and meritocracy, let's cherish the ironic humor life unfurls before us. Life, in all its sardonic wisdom, offers each of us the opportunity to run our unique race. Amid this grand orchestration of humanity, let's value the distinctiveness of each journey and the varying pathways to success. After all, a world where everyone ends up the same would be dreadfully monotonous, don't you agree?

Welcome to Day 2 of the Blind 75 Challenge! Today I will be tackling the problem of finding the maximum profit by buying and selling stock once, a common problem in algorithm interviews and coding competitions. In this blog post, I will explore a simple and efficient algorithm to solve this problem in Python, using only one pass through the array of stock prices.

You are given an array `prices` where `prices[i]` is the price of a given stock on the `ith` day. You want to maximize your profit by choosing a *single day* to buy one stock and choosing a *different day in the future* to sell that stock. Return the maximum profit you can achieve from this transaction. If you cannot achieve any profit, return 0.

In this question, we are given an array of stock prices, where the element at the `ith` index represents the stock's price on the `ith` day. We are expected to find the maximum profit, defined as the largest positive difference between a selling price and an earlier buying price. In other words, we want to maximize the profit by buying the stock on one day and selling it on a later day.

For example, given the array `[1, 2, 3, 7, 4, 3]`, the maximum profit would be `6`, since the minimum price in the array is `1` and the maximum price is `7` (which comes on a later day). Again, the task is to find the solution that gets the maximum profit from the array.

**Naive solution**

One possible approach for finding the maximum profit by buying and selling stock is to first find the minimum and maximum values in the array and then calculate the difference between them. This can be implemented as follows:

```python
minimum_price = min(input_list)
maximum_price = max(input_list)
```

Then get the maximum profit by finding the difference between the maximum price and minimum price.

```python
maximum_profit = maximum_price - minimum_price
```

While finding the minimum and maximum values in the array and subtracting them might work in some cases, it is not a correct solution in general: it fails to consider that the stock must be bought on a day preceding the selling day.

Consider the following example:

`[3, 2, 6, 5, 0, 3]`

If we simply find the minimum value (0) and the maximum value (6), we would get a profit of 6 - 0 = 6, which is incorrect, since you can only buy on a day preceding the selling day. The correct maximum profit in this case is 6 - 2 = 4, achieved by buying the stock on day 2 (price 2) and selling it on day 3 (price 6). Therefore, subtracting the global minimum from the global maximum is not a correct solution for this problem. Instead, we need an algorithm that finds the maximum profit subject to the ordering constraint.
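The failure is easy to demonstrate side by side. The sketch below uses two hypothetical helper functions (names are my own, not from the post) to compare the naive difference against the order-respecting one-pass answer on the example above:

```python
def naive_profit(prices):
    # Ignores ordering: subtracts the global min from the global max
    if not prices:
        return 0
    return max(prices) - min(prices)

def one_pass_profit(prices):
    # Correct: the buy day must precede the sell day
    if not prices:
        return 0
    minimum_price = prices[0]
    best = 0
    for price in prices:
        if price < minimum_price:
            minimum_price = price
        else:
            best = max(best, price - minimum_price)
    return best

prices = [3, 2, 6, 5, 0, 3]
print(naive_profit(prices))     # 6 -- wrong: the min (0) comes after the max (6)
print(one_pass_profit(prices))  # 4 -- buy at 2, sell at 6
```

The naive version overstates the profit exactly when the global minimum occurs after the global maximum.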

**Using One-Pass Algorithm**

To overcome the limitations of the naive approach, a one-pass algorithm can be used. This algorithm processes each element of the data structure only once and keeps track of the minimum price seen so far and the maximum profit that can be made from selling the stock at the current price.

Here are the steps for implementing the one-pass algorithm:

- First, check if the list is empty. If it is, return 0 as the maximum profit.

```python
if not input_list:
    return 0
```

Initialize the minimum price to the first element in the array and the maximum profit seen so far to zero.

```python
minimum_price = input_list[0]
maximum_profit_seen = 0
```

Traverse through the array.

`for price in input_list:`

Check if the current price is lower than the minimum price.

`if price < minimum_price:`

If it is, update the minimum price (since buying at a lower price can only increase any future profit).

`minimum_price = price`

Else calculate the profit that can be made by selling the stock at the current price. This is the difference between the current price and the minimum price so far.

```python
else:
    profit = price - minimum_price
```

Finally, compare the current profit with the maximum profit seen so far and update the profit if the current profit is greater.

```python
if profit > maximum_profit_seen:
    maximum_profit_seen = profit
```

Finally, return the maximum profit obtained.

```python
return maximum_profit_seen
```

Using this algorithm, we can find the maximum profit that can be made by buying and selling the stock once, taking into account the constraint that the buying day must precede the selling day.

```python
def maximum_profit_buy(input_list: list):
    # Check if the input list is empty
    if len(input_list) == 0:
        return 0
    # Initialize the minimum price and maximum profit seen so far
    minimum_price = input_list[0]
    maximum_profit_seen = 0
    # Traverse through the input list
    for price in input_list:
        # Update the minimum price seen so far
        if price < minimum_price:
            minimum_price = price
        else:
            # Calculate the profit that can be made by selling at the current price
            profit = price - minimum_price
            # Update the maximum profit seen so far if the current profit is greater
            if profit > maximum_profit_seen:
                maximum_profit_seen = profit
    # Return the maximum profit seen so far
    return maximum_profit_seen
```

Let's test the `maximum_profit_buy`

function:

```python
print(maximum_profit_buy([7, 6, 4, 3, 1]))     # Expected output: 0
print(maximum_profit_buy([1, 2, 3, 7, 4, 3]))  # Expected output: 6
```

The first test case uses the array `[7, 6, 4, 3, 1]`, where the stock price decreases every day. In this case, no profit can be made, so the expected output is `0`. For the second test case, the array is `[1, 2, 3, 7, 4, 3]`, and the maximum profit of `6` is made by buying the stock on day 1 (price 1) and selling it on day 4 (price 7).

The function has a time complexity of O(n), where n is the length of the input array, because we iterate through the array only once. The space complexity is O(1), since we use only a constant amount of extra space to store the minimum price seen so far and the maximum profit.

The problem of finding the maximum profit by buying and selling a stock once is a common problem in coding interviews and competitions. It can also be used in finance and economics to analyze the performance of stocks and investments.

In this blog post, we explored a simple and efficient algorithm to solve the problem of finding the maximum profit that can be made by buying and selling a stock once. By using the one-pass approach and keeping track of the minimum price seen so far and the maximum profit that can be made by selling the stock at the current price, we can solve this problem in O(n) time complexity, where n is the length of the input array.

Given an array of integers `nums` and an integer `target`, return indices of the two numbers such that they add up to `target`. You may assume that each input has exactly one solution, and you may not use the same element twice. You can assume that the given input array is not sorted.

The two-sum problem, as it is widely called, is a classic coding challenge that requires finding two integers in a given list that add up to a target value. The problem appears in many technical contexts, for example in algorithmic design, data structures, and optimization, or in interview questions for software engineering positions.

Now, to the main thing: this problem requires us to find two integers in the given list that add up to a given target value. For example, given the `example_list` and `target_int` below:

```python
example_list = [2, 3, 6, 9]
target_int = 9
```

You would be expected to write code that returns the index locations of `3` and `6`, since these are the integers that add up to the target integer, such that your return value is `[1, 2]`.

**Layman's thoughts**

When I first approached the Two-Sum problem, my initial thought was to find a way to map each number in the input list to its corresponding index location. I realized that this could be achieved by creating a table or dictionary that stores each number as a key and its corresponding index as the value. Such that for the list below:

`example_list = [2, 3, 6, 9]`

you would have a table similar to the one below:

| Elements | Index Location |
| --- | --- |
| 2 | 0 |
| 3 | 1 |
| 6 | 2 |
| 9 | 3 |

Next, I iterated over the input list and for each number, I calculated the difference between that number and the target integer. I then checked if this difference exists in the input list (excluding the current number being checked). If the difference was found in the list, I used the table or dictionary I created earlier to find the index location of the number that makes up the target sum. This gave me the indices of the two numbers that add up to the target value.

In summary, my solution involved creating a table or dictionary that maps each number to its corresponding index location in the input list, and then iterating over the list to find the difference between each number and the target integer. I then used the table or dictionary to find the location of the number that makes up the target sum.

**Pythonic thoughts**

The table can be represented in Python as a hash table or dictionary that maps each integer in the input list to its corresponding index location. This enables us to access the index location of any integer in constant time. To do this, I created a dictionary that stores the integers as keys and index locations as values. In Python, the index location and element can be obtained together using the built-in `enumerate` function, which yields both while iterating through a list:

```python
cache = {el: en for en, el in enumerate(input_list)}
```

Next, iterate over the input list and for each integer, calculate the difference between that integer and the target(given):

```python
for en, int_1 in enumerate(input_list):
```

Next, check if the difference exists in the input list (excluding the current integer being checked). This is achieved by looking it up in the dictionary. This search operation takes constant time.

```python
if (target_int - int_1) in cache:
    if cache[target_int - int_1] != en:
```

If this search operation is successful and the difference is found, use the dictionary to look up the index location of the integer that completes the sum. Note that we return the current index `en` directly, which keeps the answer correct even when the list contains duplicate values:

```python
return [en, cache[target_int - int_1]]
```

If no match is found, i.e. no two integers add up to the target value, we return an empty list.

```python
return []
```

```python
def two_sum(input_list: list, target_int: int):
    # Create a dictionary that maps each integer to its index location
    cache = {el: en for en, el in enumerate(input_list)}
    # Iterate over the input list, looking for a complement that reaches the target
    for en, int_1 in enumerate(input_list):
        if (target_int - int_1) in cache:
            # Check that the two integers are not at the same position
            if cache[target_int - int_1] != en:
                # Return the indices of the two integers that add up to the target
                return [en, cache[target_int - int_1]]
    # Return an empty list if no two integers add up to the target value
    return []
```

To test if the code works:

```python
# Example usage
list_1 = [2, 3, 6, 9]
print(two_sum(list_1, 9))  # Output: [1, 2]
```

The time complexity of this solution is O(n), where n is the length of the input list, and the space complexity is also O(n) since we need to store each integer in the input list as a key in the dictionary.

The two-sum problem is a common problem in computer science and is used in many real-world applications. For example, in financial applications, we can use the two-sum problem to find pairs of stocks that add up to a given target value. In image processing, we can use the two-sum problem to find pairs of pixels that add up to a given target colour.

In this blog post, we discussed the two-sum problem, the intuition behind solving it, and how to solve it using a dictionary/hash table. We saw that this problem has a time complexity of O(n) and a space complexity of O(n). We also discussed some use cases of the two-sum problem in real-world applications.

2. "**Mao: The Unknown Story**" by Jung Chang and Jon Halliday - A historical exploration of Mao Zedong's life and impact on China.

3. "**The Irish Difference: A Tumultuous History of Ireland's Breakup with Britain**" by Fergal Tobin - An in-depth examination of Irish culture, including its historical background and unique characteristics.

4. "**Multiple View Geometry in Computer Vision**" by Richard Hartley and Andrew Zisserman - I have read papers from both authors, fascinated by their works.

5. "**Bayesian Reasoning and Machine Learning**" by David Barber - the future is plagued with uncertainty and so is our physical world. Building an interactive machine for our physical world requires understanding uncertainties and mitigating their ripple effects. An exploration of the integration of Bayesian reasoning and machine learning for modeling uncertain systems and mitigating their potential impact.

For the last few months, I have been bothered by the ideology peddled by most employers of labour. This very concern has led me to ask a not-so-popular question: am I being approached for employment or opportunities because of the colour of my skin? Perhaps this question is popular, howbeit only in the minds of the most concerned few. This rather daunting question even led to a more stomach-turning one: if my question turns out to be true, is this equity or equality? Of course, if false, am I being headhunted because of my intelligence, my skills, and the diversity of my uniqueness, or rather because of prejudice? If true, does this mean I am privileged and profiting from an undeserved opportunity?

As a researcher, when I am faced with a challenging technical problem, especially one that leaves me stuck for days, I am led to examine the base class. In object-oriented programming, a base class is a fundamental template or blueprint on which other classes are built. These newly created classes inherit functionalities, methods, and principles from the base class. It should also be noted that new methods, principles, and ideologies can be created which override the inherited methods. Please hold this thought, as it will make more sense soon. Back to my original ponder: the questions I have asked myself for weeks have led me to this one question. Which is best: equality or equity? Or, succinctly put, which is the more noble, just, and fair goal: equity or equality? While both have inherited the ideas of social justice and fairness of rights and opportunities, one more than the other is overriding the very fundamental truth of the base class while claiming to belong to it.

As a society, we constantly debate over whether equality or equity is the more desirable goal. On the surface, the two concepts may appear to be interchangeable, but upon deeper examination, it becomes clear that they represent fundamentally different ways of thinking about the world.

Equality is the absolute ideal that everyone should be treated equally, regardless of background or characteristics. This is a noble goal and one that is deeply ingrained in our culture. The idea that all people should be treated with dignity and respect is a fundamental principle of democracy. However, the problem with this approach is that it assumes that everyone starts from the same place and that the same opportunities are available to everyone.

This is a fallacy. In truth, no two people are equal: people have different starting points and different challenges to overcome. Some individuals may have had a privileged upbringing, while others may have struggled with poverty or discrimination. Treating everyone the same, without taking these differences into account, can perpetuate inequality, defeating the very purpose of fairness, diversity, and inclusion.

Equity, however, is the idea that everyone should be provided with an equal opportunity to succeed. This means that individuals and groups who have traditionally been marginalized may require additional resources or support to achieve the same level of success as those who have not faced such barriers.

To achieve equity, we must be willing to acknowledge and address the ways in which structural inequalities exist in our society. This requires us to take a step back and examine the systems and institutions that shape our lives. We must ask ourselves: Are the playing field and opportunities equal for all individuals? Are certain groups or individuals facing barriers or discrimination that make it harder for them to succeed? It is only by acknowledging these difficult truths and taking steps to address them, that we can truly achieve a society that is fair and just for all. Equality may be a nice idea, but it is not enough. We must strive for equity if we are to create a society in which everyone can reach their full potential.

It is, however, important to note that achieving equity does not mean that everyone will have the same outcome, but rather that everyone will have the same opportunity to succeed. Some individuals may still achieve more success than others, but it will not be due to the systemic barriers or discrimination that have constantly plagued our society. More importantly, equity is not about granting preferential treatment to certain groups or individuals, but about levelling the playing field and providing the resources and support necessary to overcome barriers.

Additionally, equity must be seen as an ongoing, continuous process: society is ever-changing and dynamic, so opportunities to address inequalities, challenges, and discrimination will always arise. In practice, achieving equity may involve a variety of actions, such as implementing policies and practices that promote diversity and inclusion, creating more accessible education and job-training programmes, and addressing biases in hiring and promotion.

Ultimately, the goal of equity is to create a society in which everyone can reach their full potential, regardless of their background or characteristics. It's not only morally right but also beneficial for society, as a diverse and inclusive society is more productive and innovative.

In conclusion, while equality is a very noble goal, it is not enough to achieve a truly just and fair society. Equity is the more desirable goal, as it acknowledges and addresses the structural inequalities that exist in our society and ensures that everyone has an equal opportunity to succeed. This requires us to be willing to look beyond the surface and examine the systems and institutions that shape our lives. Only by achieving equity can we create a society in which everyone can thrive.

Over the last couple of months, I realised that I kept writing the same code over and over again, and manually doing tasks that I could easily have automated. This usually leads to a lot of boilerplate code. Being the curious mind that I am, I decided to write a package that I could easily install on my PC and use to run these tasks whenever I so desire. I can also plug this package into a larger project's code base.

So, introducing [Ormedian-Utils](https://pypi.org/project/ormedian-utils/#description), a Python package for basic computer-vision (CV) tasks.

As of now, `ormedian-utils` can do the following:

- Save frames from videos, a camera feed, or a folder containing more than one video.
- Move specific files out of a myriad of other files.
- Resize images in a folder or across multiple folders.
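To illustrate the kind of chore the package automates, here is a minimal standard-library sketch of the second task, moving files that match a pattern out of a larger collection. This is my own hypothetical version for illustration, not the actual `ormedian-utils` API; the function name `move_matching_files` is an assumption.

```python
import shutil
from pathlib import Path


def move_matching_files(src_dir, dst_dir, pattern="*.jpg"):
    """Move every file in src_dir matching `pattern` into dst_dir."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)  # create destination if missing
    moved = []
    for path in src.glob(pattern):
        target = dst / path.name
        shutil.move(str(path), str(target))  # relocate the file
        moved.append(target)
    return moved
```

A real utility would add recursion, collision handling, and multiple patterns, but the core of the task is just a glob plus a move.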

Read the docs here.

I hope you find the package useful.

This quote has stayed with me for the last three months.

`If you ever feel like a failure, remember: failure is part of succeeding.`

This has kept me going. Whatever you are facing now, or will face in the future, I hope you have the strength to pull through and come to the realisation that it is all part of the process. 9th August 2022.

I am currently working on Self-Learn-Your-Key Gaze (*SLYKGaze*), a gaze estimation technique that minimizes domain-expertise limitations and aleatoric uncertainties in learning-based gaze estimation.

I will be joining Belfast Metropolitan College in September 2022 as a Part-Time Lecturer in Machine Learning, and Part-Time Lecturer in Python.

In July 2022, our ethics application was accepted, so we are set to collect a first-of-its-kind dataset.

| Status | Milestone | Goals | ETA |
| --- | --- | --- | --- |
| 🚀 | H-E-O | 0 / 1 | March 2022 |
| 🚀 | SLYKGaze | 5 / 10 | `in progress` |
| 🚀 | HRI Dataset | 4 / 10 | `in progress` |
| 🚀 | RDSH, a deep neural network for pose estimation with DenseNet as the backbone (collaborative work with [Federico Zocco]) | 1 / 3 | `ongoing` |