Artificial Intelligence

Normal Distribution – Numerical Problems with Solutions

Author: Bindeshwar Singh Kushwaha | Platform: PostNetwork Academy

1. Definition of Normal Distribution

A continuous random variable $X$ follows a Normal Distribution with mean $\mu$ and variance $\sigma^2$ if its probability density function (PDF) is:

$$ f(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{ -\frac{(x - \mu)^2}{2\sigma^2} }, \quad -\infty < x < \infty $$
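The PDF above can be checked numerically. A minimal Python sketch (the function name `normal_pdf` is ours, not from the original tutorial):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) evaluated at x, per the PDF formula."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# At x = mu the exponent vanishes, so the density peaks at 1/(sigma*sqrt(2*pi)).
peak = normal_pdf(0.0)          # standard normal, about 0.3989
```

Evaluating at the mean recovers the normalizing constant, a quick sanity check before using the formula in the numerical problems.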


Normal Distribution A Detailed Step-by-Step Explanation

By Bindeshwar Singh Kushwaha, PostNetwork Academy

Introduction: Random Variables

A random variable (r.v.) is a function that assigns a numerical value to each outcome of a random experiment. There are two main types of random variables: a Discrete Random Variable takes countable values (e.g., number of heads in 3 coin tosses)


Support Vector Machines Made Easy | SVM Explained with Example

Support Vector Machine (SVM): A Simple Numerical Example – Detailed Explanation
Author: Bindeshwar Singh Kushwaha, PostNetwork Academy

Introduction: Type and Purpose of SVM

Type of algorithm: a supervised machine learning algorithm, used for classification and regression (SVR). It is a discriminative model – it finds decision boundaries – and is known as a maximum-margin classifier. Purpose: find the optimal hyperplane that separates the classes.
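The two quantities the excerpt names – the separating hyperplane and the maximum margin – reduce to two short formulas. A minimal sketch (the function names and the example weights are ours, chosen for illustration):

```python
import math

def svm_decide(w, b, x):
    """Classify x by the sign of the decision function w·x + b."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

def margin_width(w):
    """Width of the margin between the two supporting hyperplanes: 2 / ||w||."""
    return 2.0 / math.sqrt(sum(wi * wi for wi in w))

# Hypothetical separating hyperplane x1 + x2 - 3 = 0
label = svm_decide([1.0, 1.0], -3.0, [2.0, 2.0])   # point above the plane
```

Maximizing the margin is equivalent to minimizing ||w||, which is the optimization the full numerical example works through.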


Geometric Distribution Made Simple | Stepwise Approach #176 Data Sc. and A.I. Lect. Series

Bindeshwar Singh Kushwaha, PostNetwork Academy

Geometric Distribution

Let a sequence of Bernoulli trials be performed, each with constant probability \(p\) of success and \(q = 1 - p\) of failure. Trials are independent, and we continue performing them until the first success occurs. Let \(X\) be the number of trials required to obtain the first success.
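With \(X\) defined as the trial on which the first success occurs, \(P(X = k) = q^{k-1} p\). A minimal sketch of that PMF (the function name is ours):

```python
def geometric_pmf(k, p):
    """P(X = k): k - 1 failures followed by the first success on trial k."""
    return (1.0 - p) ** (k - 1) * p

# e.g. probability the first head appears on the 3rd fair-coin toss
p_third = geometric_pmf(3, 0.5)    # (1/2)^2 * 1/2 = 1/8
```

Summing the PMF over k = 1, 2, 3, … converges to 1, confirming it is a valid distribution.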


Hypergeometric Distribution A Distribution of Dependent Events #175 Data Sc. and A.I. Lect. Series

By Bindeshwar Singh Kushwaha, PostNetwork Academy

Introduction

In the previous sections, we studied distributions such as the binomial distribution. The binomial distribution assumes that each trial is independent and that the probability of success remains constant. However, in many real-life problems, selections are made without replacement.
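Sampling without replacement is exactly what the hypergeometric PMF models: drawing n items from a population of N containing K successes. A minimal sketch using the standard counting formula (function and parameter names are ours):

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(X = k): k successes in n draws, without replacement, from a
    population of N items of which K are successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# e.g. 3 cards drawn from 10 (4 of them "successes"): P(exactly 2 successes)
p = hypergeom_pmf(2, 10, 4, 3)    # C(4,2)*C(6,1)/C(10,3) = 36/120
```

Because each draw changes the remaining population, the trials are dependent – the key contrast with the binomial case noted above.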



Discrete Uniform Distribution in Statistics

By: Bindeshwar Singh Kushwaha, PostNetwork Academy

Discrete Uniform Distribution

A random variable \( X \) is said to have a discrete uniform distribution if it takes integer values from \( a \) to \( b \) with equal probability. The number of possible values is
\[ n = b - a + 1 \]
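With n = b - a + 1 values, each has probability 1/n; the mean is (a + b)/2 and the variance is (n² − 1)/12. A minimal sketch of these three quantities (the function name is ours), using the die roll the tutorial discusses:

```python
def uniform_stats(a, b):
    """PMF value, mean, and variance of the discrete uniform on {a, ..., b}."""
    n = b - a + 1
    pmf = 1.0 / n
    mean = (a + b) / 2.0
    var = (n * n - 1) / 12.0
    return pmf, mean, var

# A fair die: values 1..6
pmf, mean, var = uniform_stats(1, 6)   # 1/6, 3.5, 35/12
```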


Naive Bayes Classification Algorithm for Weather Dataset

Author: Bindeshwar Singh Kushwaha | PostNetwork Academy

Introduction to Naive Bayes Classifier

Naive Bayes is a probabilistic classification algorithm. It is based on Bayes' Theorem and the naive independence assumption. Suppose we have a feature vector \(\mathbf{X} = (x_1, x_2, \dots, x_n)\) and a class \(y\). Bayes' Theorem:
\[ P(y \mid \mathbf{X}) = \frac{P(\mathbf{X} \mid y)\, P(y)}{P(\mathbf{X})} \]
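Under the naive independence assumption, \(P(\mathbf{X} \mid y)\) factors into a product of per-feature likelihoods, so each class can be scored as prior × product of conditionals. A minimal sketch on a hypothetical two-feature weather table (the data and the name `nb_predict` are ours, not the tutorial's dataset):

```python
from collections import Counter

def nb_predict(rows, labels, x):
    """Score each class y by P(y) * prod_i P(x_i | y), estimated by counting."""
    n = len(labels)
    priors = Counter(labels)
    scores = {}
    for y, cy in priors.items():
        score = cy / n                       # prior P(y)
        for i, v in enumerate(x):            # one likelihood per feature
            match = sum(1 for r, l in zip(rows, labels) if l == y and r[i] == v)
            score *= match / cy              # P(x_i = v | y)
        scores[y] = score
    return max(scores, key=scores.get)

# Hypothetical toy data: (Outlook, Windy) -> Play
rows = [("Sunny", "No"), ("Sunny", "Yes"), ("Rain", "No"),
        ("Rain", "Yes"), ("Overcast", "No")]
labels = ["No", "No", "Yes", "No", "Yes"]
```

A production version would add Laplace smoothing so an unseen feature value does not zero out a class score; this sketch keeps the raw counts to mirror the formula.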


Text Classification with Bag of Words and Naive Bayes

Author: Bindeshwar Singh Kushwaha | PostNetwork Academy

Understanding Text with Machine Learning

Processing and understanding text allows extraction of meaningful information from raw data. Text data can be structured into features that machine learning algorithms can analyze. Machine learning approaches include supervised, unsupervised, and deep learning methods.
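The Bag of Words step named in the title is the simplest way to turn text into such features: map each document to a vector of word counts over a shared vocabulary. A minimal sketch (the function name is ours):

```python
from collections import Counter

def bag_of_words(docs):
    """Turn documents into count vectors over a shared, sorted vocabulary."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    counts = [Counter(d.lower().split()) for d in docs]
    vectors = [[c[w] for w in vocab] for c in counts]
    return vocab, vectors

vocab, vectors = bag_of_words(["the cat sat", "the cat ran"])
# vocab is alphabetical; each row counts that document's words
```

These count vectors are exactly the feature vectors a Naive Bayes classifier consumes in the next step of the tutorial.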


Fitting of Poisson Distribution


Bindeshwar Singh Kushwaha — PostNetwork Academy

Introduction

Master the technique of fitting the Poisson distribution to real-world frequency data. This tutorial shows a step-by-step method to calculate theoretical frequencies for observed datasets.

Key Concepts & Techniques

Introduction to Fitting: fit a theoretical Poisson distribution to experimental data to derive expected frequencies.
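The standard fitting procedure estimates \(\lambda\) by the sample mean and then computes expected frequencies \(N e^{-\lambda} \lambda^k / k!\). A minimal sketch of those two steps (function name and sample frequencies are ours, for illustration):

```python
import math

def fit_poisson(observed):
    """Fit a Poisson distribution to a frequency table {k: observed freq}.

    lambda_hat = sample mean; expected frequency of k is
    N * exp(-lambda) * lambda**k / k! for each observed k.
    """
    N = sum(observed.values())
    lam = sum(k * f for k, f in observed.items()) / N
    expected = {k: N * math.exp(-lam) * lam ** k / math.factorial(k)
                for k in observed}
    return lam, expected

# Hypothetical observed frequencies of k events
lam, exp_freq = fit_poisson({0: 10, 1: 8, 2: 2})   # lambda_hat = 0.6
```

Comparing `exp_freq` with the observed column is the basis for the goodness-of-fit check the tutorial works through.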


Gradient of Softmax + Cross-Entropy w.r.t Logits

Author: Bindeshwar Singh Kushwaha – PostNetwork Academy

Goal

We want to compute:
$$ \frac{\partial L}{\partial z_j} $$

Notation:
Logits: \(z = [z_1, z_2, \dots, z_C]\)
Softmax: \(\hat{y}_i = \frac{e^{z_i}}{\sum_{k=1}^{C} e^{z_k}}\)
Cross-Entropy Loss: \(L = -\sum_{i=1}^{C} y_i \log \hat{y}_i\), where \(y_i\) is one-hot.

[Neural network diagram omitted]
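The derivation arrives at the well-known closed form \(\partial L / \partial z_j = \hat{y}_j - y_j\). A minimal sketch of both the softmax and that gradient (function names are ours):

```python
import math

def softmax(z):
    """Numerically stable softmax: shift by max(z) before exponentiating."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def grad_wrt_logits(z, y):
    """Gradient of cross-entropy(softmax(z), y) w.r.t. the logits:
    dL/dz_j = softmax(z)_j - y_j for one-hot y."""
    return [p - t for p, t in zip(softmax(z), y)]

g = grad_wrt_logits([1.0, 2.0, 3.0], [0.0, 0.0, 1.0])
# components of g sum to zero, since softmax outputs and y each sum to 1
```

This is why the combined softmax + cross-entropy layer is preferred in practice: the gradient is a simple subtraction rather than a product of two Jacobians.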


©Postnetwork-All rights reserved.