[EdgeAI - Part 3]: The Integer Trick - How Hardware Optimally Calculates GELU

A deep dive into the fixed-point integer math used to implement the GELU activation function on modern AI hardware, showing how to replace complex computations with fast integer approximations
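
The post dives into the details; as a rough sketch of the general idea, one hardware-friendly scheme replaces the erf-based GELU with a 256-entry lookup table indexed by the int8 input code, so inference needs no erf, exp, or float math at all. The quantization scales and the int8_gelu helper below are illustrative assumptions, not the article's actual implementation.

```python
import numpy as np
from math import erf, sqrt

def gelu(x: float) -> float:
    # Reference float GELU: x * Phi(x), written via erf.
    return x * 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical quantization scales, chosen only for this illustration.
IN_SCALE = 0.05   # real input  = int8 code * IN_SCALE
OUT_SCALE = 0.05  # real output = int8 code * OUT_SCALE

# Precompute one output code for each of the 256 possible int8 inputs.
LUT = np.array(
    [int(np.clip(round(gelu(q * IN_SCALE) / OUT_SCALE), -128, 127))
     for q in range(-128, 128)],
    dtype=np.int8,
)

def int8_gelu(x_q: np.ndarray) -> np.ndarray:
    # Pure integer indexing: shift codes from [-128, 127] to [0, 255].
    return LUT[x_q.astype(np.int16) + 128]

x_q = np.array([-40, -10, 0, 10, 40], dtype=np.int8)
print(int8_gelu(x_q) * OUT_SCALE)                  # integer approximation
print([gelu(v * IN_SCALE) for v in x_q.tolist()])  # float reference
```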

[EdgeAI - Part 2]: The Integer Trick - How Hardware Really Calculates Softmax

A deep dive into the fixed-point integer math used to implement the Softmax activation function on modern AI hardware, showing how to replace complex computations with fast integer approximations
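
Again as a hedged illustration rather than the article's exact method: a common integer softmax pipeline subtracts the running max (so every exponent argument is non-positive and bounded), reads exp from a small fixed-point table, and normalizes with a single integer divide. The SCALE value and the Q0.15 output format below are assumptions made for this sketch.

```python
import numpy as np

SCALE = 0.1  # hypothetical scale: real logit = int8 code * SCALE

# Fixed-point table of exp(d * SCALE) in Q0.15, for d in [-255, 0].
EXP_LUT = np.round(np.exp(np.arange(-255, 1) * SCALE) * (1 << 15)).astype(np.int32)

def int_softmax(logits_q: np.ndarray) -> np.ndarray:
    # Subtracting the max keeps every difference in [-255, 0], so exp()
    # is bounded and can be read straight from the precomputed table.
    d = logits_q.astype(np.int32) - int(logits_q.max())
    e = EXP_LUT[np.clip(d, -255, 0) + 255]

    # Integer normalization: outputs are probabilities in Q0.15,
    # i.e. they sum to roughly 1 << 15.
    return ((e.astype(np.int64) << 15) // int(e.sum())).astype(np.int32)

q = np.array([12, 5, 0, -7], dtype=np.int8)
p = int_softmax(q)
print(p / (1 << 15))                                # dequantized result
print(np.exp(q * SCALE) / np.exp(q * SCALE).sum())  # float reference
```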

[EdgeAI - Part 1]: From PyTorch to Silicon - The Alchemist's Secret to Efficient AI

We live in an age of digital magic. We speak to our phones, generate stunning images from text, and get instant answers to complex questions. At the core of this magic are neural networks, often designed and trained in high-level frameworks like PyTorch or TensorFlow. These models are written in the language of floating-point mathematics—a world of decimal points and high precision.

One Model to Forecast Them All - A Look Back at Solving Time-Series at Scale

Course - Fundamentals to Data Crunching (Student Evaluation)

Evaluation questions for students participating in the course (Fundamentals to Data Crunching)

My Notes on Deep Learning

Practices in Deep Learning

Deep Walk to Deep Learning

Learn deep learning the hard way

Rule Parser for NLU

Parsing NLU rules efficiently and accurately

Multitask learning in the context of NLU

NLU for skill-based smart voice agents