How to Achieve Equality in AI for Women – Part 3

In this three-part series, we take a detailed look at some of the biggest challenges that women AI enthusiasts face in their field of interest, and at how we can deal with these issues so that women receive equal opportunities in the field of AI. In the first article, we discussed how women in AI experience gender bias and stereotyping. In the second article, we looked at the challenges women face in career advancement and in maintaining a work-life balance. In both articles, we also went through some important steps that we need to take to reduce or eliminate these issues. In this article, the last of the series, we will go through the final areas in which women still struggle: access to resources and recognition.

What Challenges do Women Face Regarding Access to Resources and Recognition?

Let's have a look at some of the challenges that women in AI face in these two areas.

Access to Resources

Recognition

Now that we are aware of the challenges women face in these areas, let's see what can be done to overcome them.

Overcoming Challenges that Women Face Regarding Access to Resources and Recognition

We can help women in AI gain better access to resources and proper recognition in the following ways –

Supporting women in AI involves not only addressing biases and promoting career advancement but also enhancing access to resources and recognizing their contributions. If we all intentionally make the required effort, we can ensure that women in AI receive equal treatment in their sector and are able to achieve their goals as they strive for success in the field of AI.

How to Achieve Equality in AI for Women – Part 2

In this three-part series, we take a detailed look at some of the biggest challenges that women AI enthusiasts face in their field of interest, and at how we can deal with these issues so that women receive equal opportunities in the field of AI. In the previous article, we saw how women in AI experience gender bias and stereotyping, and went through some important steps that we need to take to reduce or eliminate such issues. In this article, we will explore the challenges that women in AI face when it comes to career advancement and work-life balance, and then see what we can do to support women in these areas so that they can overcome these obstacles.

What are the Career Advancement and Work-Life Balance Challenges?

Women in AI can, at times, face hurdles in the growth of their careers, as well as in maintaining a healthy balance between their jobs and their personal lives. Let's have a look at the kinds of challenges that they face in each of these areas.

Career Advancement

Work-Life Balance

Without the right kind of support, it can be difficult for women in AI to progress in their careers and to find the right work-life balance. We therefore need to do what we can to help them overcome these challenges.

Overcoming Challenges in Career Advancement and Work-Life Balance

Now that we are aware of these specific obstacles, let us have a look at what needs to be done to help women overcome them.

By implementing transparent criteria, providing professional development, offering flexible work arrangements, and fostering supportive networks, we can create an environment where women in AI can thrive in their careers while maintaining a healthy balance between work and personal life.

How to Achieve Equality in AI for Women – Part 1

In this three-part series, we take a detailed look at some of the biggest challenges that women AI enthusiasts face in their field of interest, and at how we can deal with these issues so that women receive equal opportunities in the field of AI.

It is common knowledge that women face plenty of challenges in various work sectors, primarily because they are usually a minority in their field. However, with a growing number of women choosing to work alongside, or instead of, being a homemaker, the number of women in each sector has risen. Despite this significant increase, some women-specific issues still constantly arise, even in the IT domain. Gender bias, stereotyping, underrepresentation, and sometimes an unwelcoming workplace culture are some of the most significant hurdles. By implementing targeted solutions, we can create a more inclusive and equitable environment in AI for women. That said, let's dive into the first set of challenges: gender bias and stereotyping.

What is Gender Bias and Stereotyping?

Women often face implicit and explicit biases that can affect hiring, promotions, and everyday interactions. Some examples include:

While conditions have greatly improved over the years, we still see many cases of bias and stereotyping. It is therefore necessary to understand what these issues are, and then to take the required steps to help women overcome these challenges.

Overcoming Gender Bias and Stereotyping

Some ways of overcoming the challenges of gender bias and stereotyping include –

Addressing the challenges faced by women in the AI sector requires a multifaceted approach. By tackling gender bias and stereotyping, increasing representation and visibility, and fostering an inclusive workplace culture, we can create an environment in the field of AI in which women can thrive. Together, we can build an AI industry that creates and promotes equal opportunities for men and women alike.

A Gentle Introduction to Gradient Descent

Confused about gradient descent in machine learning? Here's what you need to know.

Introduction

In machine learning and optimization, gradient descent is one of the most important and widely used algorithms. It is a key technique for training models and fine-tuning parameters to make predictions as accurate as possible. But what exactly is gradient descent, and how does it work? In this blog post, we will explore gradient descent in simple terms, use a basic example to demonstrate how it works, dive into the technical details, and provide some code to help you get a better understanding.

What is Gradient Descent? In Simple Terms…

Gradient descent is an optimization algorithm that minimizes the cost function (or loss function) of a machine learning model. The goal of gradient descent is to adjust the parameters of the model (such as the weights in a neural network) so as to reduce the error in predictions, thereby improving the model's performance. In other words, the process involves repeatedly taking steps in the direction of the steepest decrease of the cost function.

To help you visualize gradient descent, let's consider a simple example. Imagine you're standing on a smooth hill, and your goal is to reach the lowest point. However, it is a new moon night and there are no lights around you. You can't see anything, but you can feel the slope beneath your feet. So, you decide to take a small step in the direction of the steepest downward slope (where the ground slopes the most), and then reassess your position. You repeat this process: take a step, check the slope, take another step, and so on, each time getting closer to the lowest point. In the context of gradient descent, the hill is the cost function, your position is the current set of parameters, the slope you feel is the gradient, and the size of each step is the learning rate.

Gradient Descent in Technical Terms

Let's break it down into more technical language. In machine learning, you have a model that tries to make predictions. The cost function measures how far the model's predictions are from the actual results. The objective of gradient descent is to find the model's parameters (weights, biases, etc.) that minimize this cost function. Mathematically, the update rule looks like this:

θ = θ − α ⋅ ∇J(θ)

where θ represents the model's parameters, α is the learning rate (the size of each step), and ∇J(θ) is the gradient of the cost function with respect to the parameters.

Gradient Descent Example Code

Let's implement gradient descent for a simple linear regression problem using Python. In this case, we want to fit a line to some data points, and our cost function will be the Mean Squared Error (MSE), which measures how far the predicted points are from the actual data points. The workflow is to import the necessary libraries and generate some data, define the cost function and its gradient, implement the gradient descent function that iteratively updates the parameters θ, initialize θ and run the descent, and finally plot the cost history to see how the cost decreases over time. (A runnable sketch of these steps is included at the end of this post.) The plot should show a steady decrease in the cost as the algorithm updates the parameters and moves toward the minimum.

Types of Gradient Descent

There are several variants of gradient descent, each with its own characteristics. They differ in how much data they use at each step to update the parameters: batch gradient descent uses the entire dataset for every update, stochastic gradient descent uses a single example per update, and mini-batch gradient descent uses small batches of examples.

Conclusion

In summary, gradient descent is a foundational algorithm in machine learning that helps us optimize the parameters of a model to minimize the error. Whether for simple linear regression or more complex deep learning models, understanding how gradient descent works is essential for designing and training effective models. By tuning the learning rate and choosing the right variant of gradient descent, we can help the algorithm converge to a good solution. With the help of gradient descent, machine learning models become smarter and more efficient, empowering us to make predictions and solve problems in countless applications. Whether you're working with small datasets or building large-scale systems, mastering gradient descent is a crucial skill for any data scientist or machine learning practitioner.
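For reference, here is a minimal sketch of the linear regression example described above. It assumes synthetic data generated with NumPy, batch gradient descent, and a fixed learning rate; the variable names and hyperparameter values are illustrative, not taken from the original listing.

```python
import numpy as np
import matplotlib.pyplot as plt

# Generate synthetic data: y = 4 + 3x + noise
np.random.seed(42)
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)

# Add a bias column of ones so theta holds [intercept, slope]
X_b = np.c_[np.ones((100, 1)), X]

def compute_cost(X_b, y, theta):
    """Mean Squared Error cost (halved so the gradient is cleaner)."""
    errors = X_b.dot(theta) - y
    return (errors ** 2).mean() / 2

def compute_gradient(X_b, y, theta):
    """Gradient of the cost with respect to theta."""
    m = len(y)
    return X_b.T.dot(X_b.dot(theta) - y) / m

def gradient_descent(X_b, y, theta, learning_rate=0.1, n_iterations=1000):
    """Repeatedly step in the direction of steepest descent."""
    cost_history = []
    for _ in range(n_iterations):
        theta = theta - learning_rate * compute_gradient(X_b, y, theta)
        cost_history.append(compute_cost(X_b, y, theta))
    return theta, cost_history

# Initialize the parameters randomly and run gradient descent
theta_init = np.random.randn(2, 1)
theta_best, cost_history = gradient_descent(X_b, y, theta_init)
print("Learned parameters:", theta_best.ravel())

# Plot the cost history to confirm it decreases over time
plt.plot(cost_history)
plt.xlabel("Iteration")
plt.ylabel("Cost (MSE)")
plt.title("Gradient Descent Convergence")
plt.show()
```

With a learning rate of 0.1, the learned parameters should land close to the true intercept 4 and slope 3, and the plotted cost curve should fall steadily toward a plateau.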

Deploying a Machine Learning Model for Predicting House Prices with Amazon SageMaker: A Step-by-Step Guide

Learn how to build a Machine Learning model with AWS for house price prediction.

Quick Takeaways

Introduction: Why House Price Prediction Matters

Imagine you're a real estate agent sitting across from a client who wants to list their property. They ask: "What do you think my house is worth?" You could give them a ballpark figure based on gut feeling, past sales, or comparable properties. But what if you could answer instantly, with data-backed precision? That's where machine learning meets real estate. With Amazon SageMaker, you can build and deploy a prediction engine that considers dozens of factors, like square footage and location, and outputs a price in seconds. In this blog, we'll walk through the full process. By the end, you'll have a working, production-grade ML service for property valuation.

Understanding the Problem: Why Real Estate Pricing Fits a Regression Model

When we talk about real estate price prediction, we're dealing with regression: a branch of supervised machine learning that predicts continuous numerical values rather than discrete categories. Our model's mission is simple but powerful: take in a set of property features and return an estimated selling price that is as close as possible to the real-world market value.

Challenges in Real Estate Price Prediction

Like many machine learning problems, predicting house prices isn't just about choosing a good algorithm. It's about handling messy, unpredictable, and sometimes incomplete real-world data. Some of the main hurdles that you may encounter include the following.

1. Data Inconsistency

Example: if TotalBsmtSF is missing, the model might underestimate prices for houses that actually have large finished basements. Solution in our workflow: use Pandas to clean the data and impute missing values with medians or modes so the training data is consistent.

2. Regional Price Variations

Two identical houses can have wildly different prices depending on location. These variations make it essential for the model to understand geographic context, whether through ZIP codes, latitude/longitude, or regional price indexes. Solution in our workflow: include location-related features in the dataset or transform them into numerical variables so the model can learn location-based pricing trends.

3. External Economic Influences

Real estate prices don't exist in a vacuum; they're influenced by broader economic conditions. While our model might not capture every economic variable in its first version, understanding these influences helps when deciding what extra data to add later.

Our Step-by-Step Approach to Tackle These Challenges

To tackle these challenges, we'll follow a four-phase strategy:

1. Data Preprocessing
2. Model Training
3. Deployment
4. Integration

Before we begin, we need to prepare the dataset. We will see how to do this in the next section.

Dataset Preparation

For this tutorial, we'll use the Kaggle House Prices – Advanced Regression Techniques dataset, but you can replace it with your own real estate data. The target variable is SalePrice, the actual sale price of each property. Aside from the target variable, we'll work with a handful of the more useful predictor features. The dataset actually contains 79 explanatory variables in total, but for our first version of the model, we'll use a smaller, cleaner subset of key predictors. This keeps the tutorial focused and easy to follow, while still giving strong predictive performance. (A sketch of this preparation step, including the cleaning described in the next section, follows below.)
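Here is a minimal sketch of that preparation, under stated assumptions: the Kaggle train.csv is available locally, and the predictor subset (LotArea, YearBuilt, GrLivArea, TotalBsmtSF, FullBath, BedroomAbvGr, GarageCars) is one illustrative choice of key features rather than the article's exact list.

```python
import pandas as pd

# Load the Kaggle training data (path is an assumption -- adjust to your copy)
df = pd.read_csv("train.csv")

# Illustrative subset of predictors plus the target
features = ["LotArea", "YearBuilt", "GrLivArea", "TotalBsmtSF",
            "FullBath", "BedroomAbvGr", "GarageCars"]
target = "SalePrice"
data = df[features + [target]].copy()

# Impute numeric gaps with the median, any non-numeric gaps with the mode
for col in features:
    if data[col].dtype.kind in "if":
        data[col] = data[col].fillna(data[col].median())
    else:
        data[col] = data[col].fillna(data[col].mode()[0])

# SageMaker's built-in XGBoost expects CSV with the label in the first
# column and no header row, so reorder and save accordingly.
data = data[[target] + features]
data.to_csv("train_clean.csv", index=False, header=False)
print(data.head())
```

The resulting train_clean.csv is what gets uploaded to S3 for training in the next steps.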
Data Cleaning with Pandas

Why this matters: clean data leads to better predictions. Missing values or inconsistent types can break your training job.

Setting Up Amazon SageMaker

Amazon SageMaker is AWS's fully managed ML service. It handles everything from training to deployment. We'll explore three approaches:

A. AWS Console Setup

Go to the SageMaker dashboard.

B. AWS CLI Setup

C. Boto3 SDK Setup

Model Training in SageMaker

We'll train an XGBoost regression model, because it is fast, accurate, and well supported in SageMaker.

Deploying the Model

Making Predictions

Once your model is deployed and the endpoint is live, it's time to see it in action. This is where your work so far – cleaning the data, training the model, deploying it – turns into something tangible that you can actually use. Let's say you run the prediction code. (A sketch of training, deployment, and prediction with the SageMaker Python SDK appears at the end of this post.)

What Happens Behind the Scenes

When you send this request to the SageMaker endpoint, the deployed model receives your feature values and scores them. If everything is set up correctly, the response contains a single predicted sale price for the property.

Pro Tips for Interpreting Predictions

Real-World Use Cases

Building an ML model is exciting, but what truly makes it powerful is how it's used in the real world. A trained house price prediction model deployed with Amazon SageMaker can become the backbone of many products and services, saving time, reducing human error, and offering insights at scale. Let's walk through three impactful scenarios.

1. Real Estate Websites: Instant Property Value Estimates

Imagine visiting a real estate website like Zillow or MagicBricks. You type in your home's details (lot size, year built, number of bedrooms) and instantly see an estimated selling price. Behind the scenes, this is exactly what your SageMaker model can do.

2. Bank Loan Departments: Automating Mortgage Approvals

Banks and mortgage lenders often spend days (sometimes weeks) manually assessing property values before approving a home loan. This involves sending appraisers, collecting documents, and checking local sales data. With a SageMaker-powered price prediction service, much of this assessment can be automated.

3. Property Investment Apps: Finding High-ROI Deals

Property investors are constantly looking for undervalued properties that could yield a strong return after renovation or resale. Your model can be integrated into an investment app to surface such opportunities. For example: if a property is listed at $250,000 but your model predicts it's worth $280,000, that's a potential $30,000 margin before even considering appreciation or rental income.

Pro Tip: these three scenarios aren't mutually exclusive. A single SageMaker endpoint can serve multiple apps and clients. You can run your valuation API for a real estate website, a bank's loan department, and an investment app, all with the same underlying model.

Do's and Don'ts for Creating Your Application

While this system works well and is relatively easy to develop, there are some best practices that you should keep in mind.
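To make the training, deployment, and prediction steps concrete, here is a hedged sketch using the SageMaker Python SDK. The bucket name, IAM role ARN, instance types, hyperparameters, and feature order are all illustrative assumptions, and it presumes the train_clean.csv produced earlier has been uploaded to S3.

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.serializers import CSVSerializer

session = sagemaker.Session()
region = session.boto_region_name
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

# Built-in XGBoost container for this region (version is an assumption)
container = image_uris.retrieve("xgboost", region, version="1.5-1")

estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.m5.large",
    output_path="s3://my-house-price-bucket/output",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=200)

# train_clean.csv: SalePrice first, no header, already uploaded to S3
train_input = TrainingInput(
    "s3://my-house-price-bucket/train_clean.csv", content_type="text/csv"
)
estimator.fit({"train": train_input})

# Deploy a real-time endpoint and send one property's features as CSV
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
predictor.serializer = CSVSerializer()

# Feature order must match the training CSV (minus the label column)
sample = "8450,2003,1710,856,2,3,2"
print(predictor.predict(sample))

# Delete the endpoint when finished to avoid idle charges
# predictor.delete_endpoint()
```

The response is the model's estimated sale price for the property; remember to delete the endpoint once you are done experimenting so you are not billed for idle capacity.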

Real-Time Image Moderation for User-Generated Content with Amazon Rekognition (Full AWS Tutorial + Code)

Follow this tutorial to build a real-time image moderation application using AWS.

Overview

This blog explains how to build a real-time image moderation system for user-generated content (UGC) using Amazon Rekognition, Amazon S3, and AWS Lambda. By the end, readers will know exactly how to deploy an automated, scalable, and cost-efficient moderation workflow that flags and handles harmful images instantly upon upload.

Quick Takeaways

Introduction

If you run a social platform, e-commerce marketplace, or online community, you already know: user-generated content (UGC) is both your biggest growth driver and your biggest liability. Images uploaded by users can help your platform thrive, but they can also introduce inappropriate, unsafe, or even illegal content that can damage your brand, harm your users, and get you into legal trouble. Manual moderation isn't scalable, and your users expect instant uploads and real-time feedback. That's where AI-powered moderation comes in. Today, we're going to build a fully automated, real-time image moderation pipeline using Amazon Rekognition, Amazon S3, and AWS Lambda, so that you can detect and block unsafe images before they ever reach your audience. By the end of this tutorial, you'll have a working moderation pipeline of your own.

What Is Real-Time Image Moderation and Why Does It Matter?

Real-time image moderation means that as soon as a user uploads an image, the system analyzes it and decides whether it is safe to publish. It matters because harmful images are flagged and handled before they ever reach your audience.

Why Use Amazon Rekognition for Image Moderation?

Amazon Rekognition is an AWS service for image and video analysis using deep learning. For moderation, its DetectModerationLabels API detects unsafe or inappropriate content and returns a confidence score for each label it finds. We will use it because it is fully managed, accurate, and scales with your traffic.

Architecture Overview

Here's the flow that we will build: a user uploads an image to S3, the upload event triggers a Lambda function, and the function calls Rekognition to decide whether the image is safe.

Application Workflow: Step-by-Step Tutorial

Step 1 — Create an S3 Bucket for User-Uploaded Images

You'll need two buckets: one to receive incoming uploads and a second to hold images that get flagged. Bucket policy tip: make sure your bucket does not allow public uploads without authentication; use pre-signed URLs for security.

Step 2 — Create an IAM Role for Lambda

Your Lambda function needs permission to read uploaded objects from S3 and to call Rekognition. Create the role via the AWS CLI with a trust policy (trust-policy.json) that allows Lambda to assume it, then attach the required permissions.

Step 3 — Create the Lambda Function

We'll write the moderation logic in Python in a file named lambda_function.py and deploy it via the AWS CLI. (A minimal sketch of this function appears at the end of this post.)

Step 4 — Set Up S3 Event Notifications

Configure the source bucket to invoke the Lambda function whenever a new object is created, either from the S3 console or via the CLI using a notification.json configuration.

Real-World Use Cases

User-generated content is the lifeblood of many online platforms, but it also comes with significant risks. Without proper moderation, harmful, inappropriate, or illegal content can slip through, damaging user trust and exposing the platform to legal issues. AWS services such as Amazon Rekognition offer scalable, automated ways to detect and handle such content before it reaches the public.

Best Practices & Common Pitfalls

When creating an application to moderate user-generated content (UGC) using AWS services like Rekognition, it's important to go beyond just integrating the API. A thoughtful approach ensures you maintain both platform safety and user trust. Below are key best practices to follow, and pitfalls to avoid.

Best Practices to Follow

To ensure your moderation system is both effective and user-friendly, focus on proven approaches.

Common Pitfalls to Avoid

Even a well-designed system can fail if common oversights aren't addressed.

Scaling & Optimization

When building an AI-powered image moderation pipeline, handling large volumes of image uploads efficiently is critical. A few strategies can help maintain performance while keeping costs under control:
1. Use SQS between S3 and Lambda to handle traffic spikes

Instead of triggering Lambda functions directly from S3 events, send event notifications to Amazon SQS (Simple Queue Service). This creates a buffer between the upload event and the processing step, and ensures that sudden bursts of image uploads, such as during a marketing campaign or seasonal sale, won't overwhelm your processing functions. Lambda can then pull messages from SQS at a controlled rate, allowing you to scale horizontally while avoiding function throttling.

2. Store flagged image metadata in DynamoDB for faster review

When an image is flagged by Amazon Rekognition or a custom moderation model, store its metadata (image ID, user ID, timestamp, reason for flagging) in DynamoDB. This enables moderators to quickly filter, sort, and search flagged images without reprocessing them. By keeping this data in a NoSQL database, you get millisecond query times, even as the dataset grows to millions of records.

3. Process in multiple AWS regions for lower latency

If your application has a global user base, processing moderation requests in a single AWS region can create delays for users located far from that region. By deploying your moderation pipeline in multiple AWS regions (using services like S3 Cross-Region Replication and regional Lambda deployments), you can reduce round-trip times and provide a faster, more responsive experience. This also improves redundancy: if one region experiences downtime, traffic can be routed to another.

Troubleshooting

Even with a well-configured pipeline, issues can crop up due to misconfigurations, missing permissions, or processing limits. This section highlights common problems you might face when integrating Amazon S3, AWS Lambda, and Amazon Rekognition, along with quick fixes to get your system back on track.

Problem 1: Large image processing fails
Fix: For very large files, have Rekognition read the image directly from S3 (by passing the S3 object reference rather than the raw image bytes), which reduces memory and payload size issues. Also, increase the Lambda timeout and memory allocation to handle longer processing times.

Problem 2: S3 event not triggering Lambda
Fix: Verify that the S3 bucket has the correct event notification configuration pointing to the Lambda function. Also, check that the Lambda function's resource-based policy allows invocation from the S3 service.

Problem 3: Permission denied errors
Fix: Ensure the IAM role assigned to the Lambda function has the required permissions for S3 and Rekognition (for example AmazonS3FullAccess and AmazonRekognitionFullAccess, or more tightly scoped equivalents). Missing or overly restrictive policies can prevent Lambda from reading images from S3 or calling Rekognition APIs.

FAQs

Q: What is Amazon Rekognition?
A: AWS's deep learning service for image/video analysis, including content moderation.

Q: How accurate is Rekognition?
A: Accuracy is generally high, and you can tune the confidence threshold (for example, acting only on labels above 80% confidence) to balance false positives against missed content.

Q: Is this free?
A: AWS offers a free tier, but charges apply after its limits.

Conclusion

By combining Amazon Rekognition, S3, and Lambda, you can build a real-time, automated image moderation system that keeps unsafe content from ever reaching your audience.
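As a reference for Step 3, here is a minimal sketch of the moderation Lambda. The quarantine bucket name, the 80% confidence threshold, and the move-to-quarantine behaviour are illustrative assumptions; adapt them to your own flagging policy.

```python
import json
import urllib.parse
import boto3

rekognition = boto3.client("rekognition")
s3 = boto3.client("s3")

# Hypothetical bucket name and threshold -- replace with your own values.
QUARANTINE_BUCKET = "my-ugc-quarantine"
MIN_CONFIDENCE = 80

def lambda_handler(event, context):
    # The S3 event notification carries the bucket and object key of the upload
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

    # Ask Rekognition for moderation labels on the newly uploaded object
    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=MIN_CONFIDENCE,
    )
    labels = response.get("ModerationLabels", [])

    if labels:
        # Flagged: move the object to the quarantine bucket and remove the original
        s3.copy_object(
            Bucket=QUARANTINE_BUCKET,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
        )
        s3.delete_object(Bucket=bucket, Key=key)

    return {
        "statusCode": 200,
        "body": json.dumps({
            "key": key,
            "flagged": bool(labels),
            "labels": [label["Name"] for label in labels],
        }),
    }
```

For this sketch to run, the function's IAM role needs s3:GetObject, s3:PutObject, and s3:DeleteObject on the relevant buckets plus rekognition:DetectModerationLabels, in line with Step 2.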

How to Build a Serverless Customer Feedback System with AWS Lambda & DynamoDB (A Step-by-Step Guide)

Learn how to collect, store, and analyze customer feedback in real time using AWS, with zero servers to manage.

Overview

This tutorial walks you through creating a serverless customer feedback app using AWS Lambda, DynamoDB, API Gateway, and Amazon Comprehend.

Introduction

Customer feedback is gold, but only if you can capture it easily and analyze it fast enough to act on it. The problem? Many small businesses and startups either rely on clunky Google Forms or expensive survey platforms. What if you could build your own feedback system that's fast, cost-efficient, and runs without you having to manage any servers? That's exactly what we're going to do today using AWS Lambda, API Gateway, DynamoDB, and Amazon Comprehend (for optional sentiment analysis).

Why This Matters in 2025

Customer feedback is a competitive advantage, especially in an AI-first business world. A serverless AWS solution gives you automation, instant insights, and almost zero infrastructure cost, which makes it ideal for businesses that want to move fast.

Real-World Use Cases

1. Restaurants Tracking Diner Reviews in Real Time

Imagine a busy Friday night at your restaurant. Reviews are pouring in from Google, Yelp, TripAdvisor, and even Instagram comments. By the time you manually check them, the unhappy diners have already gone home, and possibly told 10 friends. With AWS Lambda, DynamoDB, and Amazon Comprehend, you can capture and classify these reviews as they arrive. Why it matters: responding within minutes instead of days can turn a 1-star review into a repeat customer, and create a "wow" moment that people talk about online.

2. SaaS Products Analyzing Feature Requests and Bug Reports

If you run a SaaS product, your feedback inbox is probably a mix of bug complaints, feature requests, "how do I" questions, and random praise. Manually sorting these is tedious, inconsistent, and slow. Why it matters: your product team gets actionable, categorized insights in real time. No more missing high-impact bugs or delaying popular feature launches.

3. E-Commerce Stores Flagging Negative Delivery Experiences Instantly

In e-commerce, shipping delays and damaged products can erode trust quickly. But if you only see customer complaints during your weekly review, you've already lost them. Why it matters: instead of letting a negative delivery experience go viral, you proactively fix it, and possibly turn that customer into a brand advocate.

Now that we understand the importance of customer feedback, let's move ahead to developing the actual application using AWS.

Step 1: Understand the Architecture

Essentially, the system consists of an API Gateway endpoint that receives feedback, a Lambda function that processes it, DynamoDB for storage, and Amazon Comprehend for sentiment analysis.

Step 2: Set Up DynamoDB Table

We'll start by creating a table to store feedback.

Step 3: Create AWS Lambda Function (Python)

Next, we'll create a Lambda function that stores feedback in DynamoDB and analyzes sentiment. Create a new Python file named 'lambda_function.py' for this code. (A minimal sketch of such a function appears at the end of this post.)

Step 4: Deploy with API Gateway

We will now create an API Gateway endpoint.

Step 5: HTML Feedback Form

The next step is to create a basic HTML-based feedback form.

Step 6: Test the System

Extra Features That Can Be Added

In addition to the current functionality, we can add some extra features to improve the overall usability of the system.
Some of these features include –

Why This Approach Works

The benefits of using this system are as follows –

Real-World Action Plan

Here's how you can deploy this serverless architecture for real-world use –

What's Coming Next in AI-Driven Feedback Analysis

AI-powered feedback analysis is moving beyond just "spotting a bad review". It is evolving into a continuous, automated customer relationship system. Here's where things are headed –

Conclusion

By combining AWS Lambda, API Gateway, DynamoDB, and Amazon Comprehend, we've created a fully serverless customer feedback system that's affordable, scalable, and intelligent. This isn't just about collecting feedback; it's about understanding your customers and improving based on what they tell you. And since the system costs almost nothing when idle, it's perfect for startups and small businesses looking to get smarter without taking on large and unnecessary expenses.

FAQs

Q: How do I create a serverless customer feedback app with AWS?
A: You can build it with AWS Lambda, API Gateway, DynamoDB, and Amazon Comprehend to process and store feedback without managing servers.

Q: What's the cheapest way to store customer feedback in AWS?
A: DynamoDB is cost-effective and scales automatically, making it ideal for feedback storage.

Q: Can AWS analyze customer sentiment automatically?
A: Yes, Amazon Comprehend detects Positive, Negative, Neutral, and Mixed sentiment in feedback.

Q: Do I need AWS certification to build this?
A: No. You just need an AWS account and a basic understanding of Lambda and DynamoDB.
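As a reference for Step 3, here is a minimal sketch of the Lambda handler. The table name 'CustomerFeedback', the attribute names, and the permissive CORS header are illustrative assumptions; adjust them to match the table you created in Step 2 and your own form.

```python
import json
import uuid
from datetime import datetime, timezone

import boto3

dynamodb = boto3.resource("dynamodb")
comprehend = boto3.client("comprehend")

# Hypothetical table name -- use whatever you created in Step 2.
table = dynamodb.Table("CustomerFeedback")

def lambda_handler(event, context):
    # API Gateway passes the POSTed form data as a JSON string in event["body"]
    body = json.loads(event.get("body") or "{}")
    feedback_text = body.get("feedback", "").strip()

    if not feedback_text:
        return {"statusCode": 400,
                "body": json.dumps({"error": "Feedback text is required"})}

    # Optional sentiment analysis with Amazon Comprehend
    sentiment = comprehend.detect_sentiment(
        Text=feedback_text, LanguageCode="en"
    )["Sentiment"]

    # Store the feedback item in DynamoDB
    item = {
        "feedback_id": str(uuid.uuid4()),
        "created_at": datetime.now(timezone.utc).isoformat(),
        "name": body.get("name", "anonymous"),
        "feedback": feedback_text,
        "sentiment": sentiment,
    }
    table.put_item(Item=item)

    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},  # lets the HTML form call the API
        "body": json.dumps({"message": "Feedback received", "sentiment": sentiment}),
    }
```

In this sketch the DynamoDB table would use feedback_id as its partition key, and the function's role needs dynamodb:PutItem and comprehend:DetectSentiment permissions.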

NumPy ndarray Vs. Python Lists

Article Contributed By: Chandrika Mutalik

NumPy is a package for scientific computing, used to overcome Python's slow handling of multidimensional arrays built from lists. In other words, it is an extension to Python that makes multidimensional arrays behave like native objects. NumPy arrays are written specifically with this multi-dimensional use case in mind and hence provide better performance in terms of both speed and memory.

Why is it More Efficient?

Python's lists do not have to be homogeneous: they can hold a string element, an integer, and a float. To create a structure that supports all types, CPython implements it like so (here, PyObject and PyTypeObject store methods, I/O, and subclassing attributes):

```c
typedef struct _object {
    _PyObject_HEAD_EXTRA
    Py_ssize_t ob_refcnt;
    struct _typeobject *ob_type;
} PyObject;

typedef struct _typeobject {
    PyObject_VAR_HEAD
    const char *tp_name;                   /* For printing, in format "<module>.<name>" */
    Py_ssize_t tp_basicsize, tp_itemsize;  /* For allocation */

    /* Methods to implement standard operations */
    destructor tp_dealloc;
    Py_ssize_t tp_vectorcall_offset;
    getattrfunc tp_getattr;
    setattrfunc tp_setattr;
    PyAsyncMethods *tp_as_async;           /* formerly tp_compare (Python 2) or tp_reserved (Python 3) */
    reprfunc tp_repr;

    /* Method suites for standard classes */
    PyNumberMethods *tp_as_number;
    PySequenceMethods *tp_as_sequence;
    PyMappingMethods *tp_as_mapping;

    /* More standard operations (here for binary compatibility) */
    hashfunc tp_hash;
    ternaryfunc tp_call;
    reprfunc tp_str;
    getattrofunc tp_getattro;
    setattrofunc tp_setattro;

    /* Functions to access object as input/output buffer */
    PyBufferProcs *tp_as_buffer;

    /* Flags to define presence of optional/expanded features */
    unsigned long tp_flags;

    const char *tp_doc;                    /* Documentation string */

    /* Assigned meaning in release 2.0 */
    traverseproc tp_traverse;              /* call function for all accessible objects */
    inquiry tp_clear;                      /* delete references to contained objects */

    /* Assigned meaning in release 2.1 */
    richcmpfunc tp_richcompare;            /* rich comparisons */
    Py_ssize_t tp_weaklistoffset;          /* weak reference enabler */

    /* Iterators */
    getiterfunc tp_iter;
    iternextfunc tp_iternext;

    /* Attribute descriptor and subclassing stuff */
    struct PyMethodDef *tp_methods;
    struct PyMemberDef *tp_members;
    struct PyGetSetDef *tp_getset;
    struct _typeobject *tp_base;
    PyObject *tp_dict;
    descrgetfunc tp_descr_get;
    descrsetfunc tp_descr_set;
    Py_ssize_t tp_dictoffset;
    initproc tp_init;
    allocfunc tp_alloc;
    newfunc tp_new;
    freefunc tp_free;                      /* Low-level free-memory routine */
    inquiry tp_is_gc;                      /* For PyObject_IS_GC */
    PyObject *tp_bases;
    PyObject *tp_mro;                      /* method resolution order */
    PyObject *tp_cache;
    PyObject *tp_subclasses;
    PyObject *tp_weaklist;
    destructor tp_del;

    unsigned int tp_version_tag;           /* Type attribute cache version tag. Added in version 2.6 */

    destructor tp_finalize;
    vectorcallfunc tp_vectorcall;

#ifdef COUNT_ALLOCS
    /* these must be last and never explicitly initialized */
    Py_ssize_t tp_allocs;
    Py_ssize_t tp_frees;
    Py_ssize_t tp_maxalloc;
    struct _typeobject *tp_prev;
    struct _typeobject *tp_next;
#endif
} PyTypeObject;
```

However, NumPy's array uses PyArrayObject, which is defined with the type of operations it needs to support in mind.
The source for the above definitions can be found on GitHub: https://github.com/numpy/numpy/blob/master/numpy/core/include/numpy/ndarraytypes.h

The element size is fixed for each ndarray and can be accessed via the array's item size (exposed as ndarray.itemsize in Python, with corresponding macros in the C API); a quick demonstration follows below. Similarly, the other macros and definitions for PyArray in the link above can be used to see how the getters and setters work.

Official SciPy documentation for PyArrayObject: https://docs.scipy.org/doc/numpy/reference/c-api.types-and-structures.html#c.PyArrayObject
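To see the fixed element size and the performance difference in practice, here is a small sketch; the array size and the use of time.perf_counter for a rough timing comparison are illustrative choices.

```python
import sys
import time
import numpy as np

n = 1_000_000
py_list = list(range(n))
nd_array = np.arange(n, dtype=np.int64)

# Fixed element size: every element of the ndarray occupies the same number of bytes.
print("itemsize:", nd_array.itemsize, "bytes")            # 8 for int64
print("ndarray data buffer:", nd_array.nbytes, "bytes")   # n * itemsize

# A Python list stores pointers to separately allocated int objects,
# so its footprint is the pointer array plus one boxed object per element.
print("list (pointer array only):", sys.getsizeof(py_list), "bytes")

# Rough speed comparison: element-wise multiplication.
start = time.perf_counter()
doubled_list = [x * 2 for x in py_list]
print("list comprehension:", time.perf_counter() - start, "s")

start = time.perf_counter()
doubled_array = nd_array * 2  # vectorized, runs in C over the contiguous buffer
print("NumPy vectorized op:", time.perf_counter() - start, "s")
```

On a typical machine the vectorized NumPy operation is roughly one to two orders of magnitude faster, which is the speed and memory advantage described above.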

Machine Learning Concepts for Beginners

Let's face it – EVERYONE wants to know about Machine Learning. Considering the immense job-creating, life-revolutionising potential that it has, it is no surprise that it is in such high demand now. There are so many articles, videos, and books everywhere! The amount of online content is truly spectacular, but for a beginner, it can be quite intimidating. It's almost like being given a plethora of cuisines, and then being instructed to review them all. Where would you start? How would you consume all of it? How much of each would you need to have until you can come up with an accurate review? For this reason, this article aims to consolidate some of the Machine Learning fundamentals into one easy-to-understand article. Thus, those of you who are just getting started can easily learn the basics without being overwhelmed by the technical details. That said, we will now get into the "What", "Why", "When", "Where", and "How" of Machine Learning. Let's begin!

WHAT is Machine Learning?

Machine Learning is the process by which a machine learns how to think like a human being in order to perform a specific task, without being explicitly programmed.

WHY do we use Machine Learning?

By training a machine to think like a human being, the execution of certain tasks becomes easier, quicker, and much more efficient.

WHEN do we use Machine Learning?

Machine Learning was invented by some very ambitious people who desired to develop an intelligence that could resemble, if not surpass, natural human intelligence. The term 'Machine Learning' was coined by Arthur Samuel in the 1950s. This was a time when Alan Turing proposed the 'Learning Machine', and Marvin Minsky and Dean Edmonds built the first Neural Network machine. Within that same decade, Arthur Samuel invented a Checkers playing machine, and Frank Rosenblatt developed the very first Perceptron. From there, Machine Learning steadily began to grow.

WHERE do we use Machine Learning?

Machine Learning has come so far, from playing games to recommending products to customers. The more the technology advanced, the better its applicability became. Listed below are five important applications of Machine Learning that are commonly used, easy to remember, and good to know.

Spam Filter: Spam emails can automatically be detected within your inbox and stored in your Spam folder. That way, they don't interfere with your more important emails, and you spend less time and effort sorting out your inbox.

Recommendation Systems: Most online stores use Machine Learning to recommend items based on the user's recent activity and requirements. This prevents customers from getting irrelevant suggestions, and increases the chances of them making a purchase.

Virtual Assistants: They assist users in their daily requirements like setting alarms, making lists, and so on. They then store data from previous tasks, and tailor their performance based on these preferences.

Search Engines: Search engines use Machine Learning algorithms to find and display results that are most relevant to a user's search. They even filter results based on the user's past activity.

GPS: Travelling has become so much easier thanks to GPS apps. These systems use Machine Learning to make travelling less difficult. They can show people their current location, the distance between two places, the estimated time it would take to reach another location, and the amount of traffic that could either increase or decrease their time of arrival.

HOW does Machine Learning Work?
Now that we know some of the important facts about Machine Learning, we shall proceed to the more interesting part: understanding how Machine Learning works. The first thing to know is that Machine Learning is mainly of two types:

Supervised Learning: It involves the use of labelled data (where the number of classes is known).

Unsupervised Learning: It involves the use of unlabelled data (where the number of classes is unknown).

Let's have a look at five differences between Supervised Learning and Unsupervised Learning.

Supervised Learning: It is a method of Machine Learning that deals with labelled input data. It is used for Regression (predicting continuous variables) and Classification (predicting categorical variables). It is more time-consuming but also more accurate. Some applications include stock price prediction, object detection, spam detection, and sentiment analysis.

Unsupervised Learning: It is a method of Machine Learning that deals with unlabelled input data. It is used for Clustering (finding patterns in the data) and Association (identifying relationships between elements in the dataset). It is less time-consuming but also less accurate. Some applications include credit card fraud detection and customer behavior analysis.

There is also a third type of Machine Learning method, known as Reinforcement Learning.

Reinforcement Learning: It is a method of Machine Learning that aims to make the most optimal decision at each step in order to maximize the reward. It uses algorithms that learn from previous outcomes and then decide what action to take next. Thus, decisions are made sequentially, i.e., the next input is based on the previous output, unlike supervised and unsupervised learning, in which decisions are made only on the basis of the initial input data. There are two types of reinforcement: Positive Reinforcement (adding a positive stimulus or reward after some behavior to increase the likelihood of its recurrence) and Negative Reinforcement (removing a negative stimulus after some behavior to increase the likelihood of its recurrence). For example, positive reinforcement would be giving a dog its favorite toy as a reward for behaving well, whereas negative reinforcement would be removing something the dog dislikes, such as a tight leash, once it behaves well. Some applications include text prediction and gaming.

Now that we are familiar with the types of Machine Learning, let's briefly go through some of the different algorithms used in Machine Learning.

Types of Supervised Machine Learning Algorithms:

Linear Regression
Support Vector Machines (SVM)
Neural Networks
Decision Trees
Naive Bayes
Nearest Neighbour

Types of Unsupervised Machine Learning Algorithms:

k-means clustering
Association rule
Principal component analysis

Types of Reinforcement Learning:

Q-Learning
Deep Adversarial Networks

Last but not