"The only way of finding the limits of the possible is by going beyond them into the impossible."

Hi, this is Sanjeev. I am passionate about robots and autonomous vehicles, and was fascinated by the DARPA 2007 Urban Challenge. My interests include autonomous navigation, path planning, reinforcement learning, machine learning and convex optimization. In the future, I would like to build robots that help humans in a multitude of tasks.

Hi, this is Prateek. My interests include penetration testing, wireless attacks, intrusion detection and forensic analysis. I am presently working on modelling an intrusion detection system; in the future I would like to become an information security expert. We graduated from the Indian Institute of Technology Roorkee with a B.Tech in Electrical Engineering in 2011. We started searching-eye in the second year of our undergrad to share knowledge. It was fun back then; we have since moved on to other projects and are no longer active on searching-eye. We hope you find something useful here.

Recently Viewed

Google's Self-Driving Car: Sebastian Thrun Talk

53:10

Sebastian Thrun, on October 29th, 2012, gave a talk on Autonomous Driving.

Tags: Autonomous Driving, Sebastian Thrun, Google

Added: 661 days ago by admin | Views: 2353 | Comments: 0 | Not yet rated

Autonomous Waypoint Generation Strategy for On-Line Navigation in Unknown Environments

3:00

Sanjeev Sharma - August 05, 2012: This video demonstrates a framework for navigation and path planning in unknown 2D and 3D environments with a limited field of view. It uses reinforcement learning to generate a waypoint in the robot's field of view; a path planner then generates a path to the waypoint. The process is iterated until the robot reaches the goal.

This video is attached to the paper in the IROS Workshop on Robot Motion Planning: Online, Reactive, and in Real-Time, 2012. It demonstrates non-holonomic motion planning in unknown environments, among other examples.

Sanjeev Sharma and Matthew E. Taylor, *Autonomous Waypoint Generation Strategy for On-Line Navigation in Unknown Environments*. In Proceedings of IROS Workshop on Robot Motion Planning: Online, Reactive, and in Real-Time, 2012.

Tags: Path Planning, Waypoint Generation, Navigation

Added: 752 days ago by admin | Views: 1939 | Comments: 0 | Not yet rated

High Speed On-Line Motion Planning In Cluttered Environments

2:59

Sanjeev Sharma - July 21, 2012: This video demonstrates an online non-holonomic motion planner for navigation in cluttered environments. The algorithm selects a sequence of intermediate goals online, through which the robot navigates. The algorithm was tested experimentally on a differential-drive robot.

(i) Z. Shiller and S. Sharma, *High Speed On-Line Motion Planning in Cluttered Environments*, IROS 2012; (ii) Z. Shiller and S. Sharma, *On-Line Obstacle Avoidance at High Speeds*, Romansy, 2012.

This video is attached to the accepted paper in IROS 2012.

Tags: Online Motion Planning, Non-Holonomic Constraints

Added: 767 days ago by admin | Views: 1854 | Comments: 0 | Not yet rated

Unconstrained Minimization: Steepest Descent Methods & Convergence Analysis

44:46

SANJEEV SHARMA: 7th Jan 2011. CCO-10/11: P-004, Section-2: Unconstrained Minimization: Steepest Descent Methods and Convergence Analysis.

Contents: Steepest Descent, Coordinate Descent, Newton's Method, Convergence Analysis.

Steepest descent is one of the algorithms for solving unconstrained minimization problems. It is an iterative algorithm: in each iteration it finds a steepest-descent direction, with the length of the descent vector constrained by some valid norm ||.||. Different norms for constraining the length of the descent direction result in different descent algorithms: the *L_{2}-norm* results in the gradient descent algorithm, the quadratic norm ||z||_P = (z^T P z)^{1/2} results in steepest descent in coordinates transformed by P, and the *L_{1}-norm* results in coordinate descent.
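The gradient-descent special case above can be sketched in a few lines. This is a toy illustration with my own choice of objective and fixed step size, not code from the lecture.

```python
# Minimal gradient descent (the L2-norm case of steepest descent).
# The function and step size below are illustrative assumptions.

def grad_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
    """Iterate x <- x - step * grad(x) until the gradient norm is small."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# f(x) = (x1 - 1)^2 + 2*(x2 + 3)^2, minimized at (1, -3)
grad_f = lambda x: [2 * (x[0] - 1), 4 * (x[1] + 3)]
x_star = grad_descent(grad_f, [0.0, 0.0])
```

Running this drives the iterate to the minimizer (1, -3) of the toy quadratic.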

Tags: Steepest Descent, Gradient Descent, Coordinate Descent, Convergence Analysis

Added: 1329 days ago by admin | Views: 2622 | Comments: 1 | Not yet rated

Unconstrained Minimization: Convergence Analysis of Gradient Descent Using Line Search

21:54

SANJEEV SHARMA: 12th Dec 2010. CCO-10/11: P-003, Section-2: Unconstrained Minimization: Convergence Analysis & Condition Number Dependence - Gradient Descent.

Contents: Backtracking Line Search, Exact Line Search, Condition Number, Gradient Descent.

Gradient descent is an algorithm for solving unconstrained minimization problems. It is an iterative algorithm: each iteration finds a descent direction and a step length to move in that direction. Step 1 involves finding a search direction δx, which for gradient descent is δx = -∇f(x), and step 2 involves finding a step length t to move in the direction δx. Exact line search and backtracking line search may be used to find the step length. The rate of convergence of gradient descent depends on the eccentricity of the sublevel sets, which is governed by the condition number of the Hessian of the function at the optimum, κ(∇²f(x)). This presentation discusses the convergence analysis and the condition-number dependence of the rate of convergence for gradient descent with line-search methods.

Tags: Convergence Analysis, Condition Number, Gradient Descent, Unconstrained Minimization

Added: 1355 days ago by admin | Views: 3931 | Comments: 0 | Not yet rated

Unconstrained Minimization: Backtracking Line Search & Gradient Descent

30:15

SANJEEV SHARMA: 3rd Dec 2010. CCO-10/11: P-002, Section-2: Unconstrained Minimization: Backtracking Line Search & Gradient Descent.

Contents: Exact Line Search; Inexact Line Search; Backtracking Line Search; Gradient Descent.

Descent algorithms are a class of methods for solving unconstrained minimization problems. They are iterative solvers that alternate between two steps to find the solution. Step 1 involves finding a search direction δx, and step 2 involves finding a step length *t* to move in the direction δx. The step length is found using line search methods, of which there are two kinds: exact and inexact line search. Exact line search finds the step length for which the function f(x+*t*δx) is minimized, i.e. *t*=argmin_{(s>0)} f(x+sδx). Inexact line search just finds a step length such that f(x+*t*δx) is approximately minimized; the most popular variant is the backtracking line search algorithm, which depends on two constants, α and β. Gradient descent is the algorithm that uses δx=-∇f(x), i.e. the direction of maximum decrease of f(x), and it uses line search (exact or backtracking) to find the step length to move in this direction. The performance of the algorithm depends on the sublevel sets of f(x) near the optimum. This presentation shows how to apply the backtracking line search algorithm in practice.
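The backtracking loop described above can be sketched as follows. The constants α, β and the test function are my own illustrative choices.

```python
# Backtracking line search: shrink t by beta until the sufficient-decrease
# (Armijo) condition f(x + t*dx) <= f(x) + alpha * t * <grad f(x), dx> holds.

def backtracking(f, grad, x, dx, alpha=0.3, beta=0.8):
    t = 1.0
    fx = f(x)
    slope = sum(g * d for g, d in zip(grad(x), dx))  # directional derivative
    while f([xi + t * di for xi, di in zip(x, dx)]) > fx + alpha * t * slope:
        t *= beta
    return t

f = lambda x: x[0] ** 2 + 10 * x[1] ** 2
grad = lambda x: [2 * x[0], 20 * x[1]]
x = [1.0, 1.0]
dx = [-g for g in grad(x)]      # gradient-descent direction dx = -grad f(x)
t = backtracking(f, grad, x, dx)
```

The returned step is guaranteed to decrease f, which is exactly what the Armijo condition enforces.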

Tags: Backtracking Line Search, Gradient Descent, Exact Line Search

Added: 1364 days ago by admin | Views: 5372 | Comments: 0 | Not yet rated

Unconstrained Minimization: Theoretical Analysis - Stopping Criterion & Condition Number

25:07

SANJEEV SHARMA: 2nd Dec 2010. CCO-10/11: P-001, Section-2: Unconstrained Minimization: Theoretical Analysis of Stopping Criterion & Condition Number.

Contents: Condition Number, Stopping Criterion, Strong Convexity.

Unconstrained minimization, as the name suggests, is minimizing a function f(x) without any constraints; the only constraints are the implicit ones on the domain of the function being minimized. These problems are solved through methods such as descent methods, steepest descent, Newton's method, interior point methods, etc. The line search algorithms involve an iterative solver, so we need a criterion for stopping the algorithm, and we also need to predict the algorithm's performance. These iterative solvers depend on the condition number of the Hessian of the objective, κ(∇²f(x)), near the optimal point, and the rate of convergence depends on the eccentricity of the sublevel sets, which this condition number measures. Moreover, these algorithms use the norm of the gradient of the objective function as a stopping criterion. This presentation provides a complete derivation of the stopping criterion and discusses the relation between the condition number of the α-sublevel sets and κ(∇²f(x)).
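The stopping-criterion derivation the presentation refers to can be summarized by the standard strong-convexity argument, with m the strong-convexity constant:

```latex
% Assuming m-strong convexity, mI \preceq \nabla^2 f(x), for all x, y:
f(y) \ge f(x) + \nabla f(x)^T (y - x) + \tfrac{m}{2}\,\|y - x\|_2^2
% Minimizing the right-hand side over y (attained at y = x - \nabla f(x)/m):
p^\star \ge f(x) - \tfrac{1}{2m}\,\|\nabla f(x)\|_2^2
% Hence the stopping rule on the gradient norm:
\|\nabla f(x)\|_2 \le \sqrt{2 m \epsilon}
\;\Longrightarrow\;
f(x) - p^\star \le \epsilon .
```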

Tags: Unconstrained Minimization, Condition Number, Convergence, Line Search

Added: 1365 days ago by admin | Views: 2545 | Comments: 0 | Not yet rated

General Mathematical Optimization

23:31

Sanjeev Sharma: 30th Nov 2010. CCO-10/11: P-001, Section-1: General Mathematical Optimization

Contents: Linear Programs, Least-Squares, Convex Programs, Nonlinear Programs and Relaxations.

A mathematical optimization problem involves minimizing (or maximizing) a mathematical function of an optimization variable, subject to some constraints on that variable. There are several classes of optimization problems, for example linear programming, least-squares, convex programming and general nonlinear programming problems. Linear programs and least-squares problems can be solved reliably and efficiently. Convex programming includes linear programs and least-squares problems as special cases; convex programs can be solved efficiently by methods such as interior point, subgradient, cutting plane and ellipsoid methods. Solving a general nonlinear program (not linear and not known to be convex) is a challenging task, as no universally effective approach is known, so a common way to handle these problems is to find a locally optimal solution. Convex optimization also plays a role in nonlinear programming through lower bounds: several relaxation methods replace the nonlinear constraints with looser convex constraints to find a lower bound on the nonlinear problem.

Tags: Optimization

Added: 1367 days ago by admin | Views: 2561 | Comments: 0 | Not yet rated

Support Vector Machine & In-depth Convex Analysis

78:15

Sanjeev Sharma
23rd Nov 2010: Machine Learning: Lecture-12: MLR: Contents: Primal Hard Margin & Soft-Margin, Dual Hard Margin & Soft Margin, KKT-Conditions,
Lower Bounds, Lagrange Dual, Slater's Constraint Qualification, Weak & Strong Duality,
Complementary Slackness.

Description: The SVM can be used both as a hard- and a soft-margin classifier. The hard margin is used when the dataset is separable; the soft margin is used for classifying overlapping classes. The power of the SVM comes from the dual formulation, which utilizes the kernel trick and thus facilitates basis construction. The sparsity of the SVM comes from complementary slackness, one of the KKT conditions. Slater's constraint qualification is satisfied for the SVM since all the constraints are affine and the objective function is convex; hence strong duality holds, and solving the primal is equivalent to solving the dual problem, since the optimal duality gap is zero. The Lagrangian dual explains the relationship between the primal and dual problems. This lecture presents the complete convex analysis of the SVM, deriving the KKT conditions and explaining the Lagrange dual and strong duality. It also explains the interpretation of the Lagrange multipliers and their importance.
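For reference, the standard soft-margin primal and its Lagrange dual discussed in the lecture are:

```latex
% Soft-margin SVM primal (C > 0 trades margin width against slack \xi_i):
\min_{w, b, \xi}\ \tfrac{1}{2}\|w\|^2 + C \sum_i \xi_i
\quad \text{s.t.}\quad y_i (w^T x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0 .
% Lagrange dual: only inner products x_i^T x_j appear, so a kernel
% k(x_i, x_j) can replace them (the kernel trick):
\max_{\alpha}\ \sum_i \alpha_i
  - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, x_i^T x_j
\quad \text{s.t.}\quad 0 \le \alpha_i \le C,\ \ \sum_i \alpha_i y_i = 0 .
% Complementary slackness forces \alpha_i = 0 for points strictly outside
% the margin, which is the source of the SVM's sparsity.
```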

Tags: SVM, Lagrange Dual, KKT-Conditions, Weak & Strong Duality, Slater's Constraint Qualification, Complementary Slackness

Added: 1373 days ago by admin | Views: 3950 | Comments: 1 | Not yet rated

Machine Learning: Perceptrons- Kernel Perceptron Learning

33:43

SANJEEV SHARMA
12th Nov 2010: Machine Learning: Lecture-11: Kernel Perceptron Learning.

CONTENTS: Simple Perceptron Algorithm Voted Perceptron Algorithm, Kernel Perceptron Algorithm.

DESCRIPTION: Solving a machine learning problem like classification or regression requires constructing basis functions, and in general it is quite hard to determine what kind of basis functions will perform well on the task at hand. Sometimes a polynomial may perform well, but what should its degree be? Using kernels circumvents this problem: the cardinal advantage of kernels is that they obviate the need to construct the basis functions explicitly. In this lecture I address this issue. I first explain the simple perceptron learning algorithm with linear basis functions, and then the voted version of the perceptron, again with linear basis functions. The voted version assigns a weight to each weight vector it encounters during the learning phase and outputs a final weight vector that is the voted sum of those weight vectors. A perceptron can solve nonlinear problems by constructing nonlinear basis functions, but the KERNEL PERCEPTRON algorithm obviates the need to construct those basis functions at all.
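The kernel perceptron can be sketched on the classic XOR problem, which no linear perceptron can solve. The Gaussian kernel, bandwidth, and epoch count below are my own toy choices, not from the lecture.

```python
# Kernel perceptron: keep a mistake count alpha_i per training point and
# predict sign( sum_i alpha_i * y_i * K(x_i, x) ) -- no explicit basis.
import math

def rbf(a, b, sigma=0.5):
    d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return math.exp(-d2 / (2 * sigma ** 2))

X = [(0, 0), (1, 1), (0, 1), (1, 0)]
Y = [1, 1, -1, -1]                      # XOR labels: not linearly separable
alpha = [0] * len(X)                    # mistake counts per training point

def predict(x):
    s = sum(a * y * rbf(xi, x) for a, y, xi in zip(alpha, Y, X))
    return 1 if s > 0 else -1

for _ in range(50):                     # epochs
    for i, (xi, yi) in enumerate(zip(X, Y)):
        if predict(xi) != yi:           # mistake-driven update
            alpha[i] += 1

preds = [predict(x) for x in X]
```

After a few epochs the mistake counts stabilize and all four XOR points are classified correctly.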

Tags: Kernel Machines, Kernel Trick, Perceptron, Kernel Perceptron, Voted Perceptron

Added: 1384 days ago by admin | Views: 3316 | Comments: 0 | Not yet rated

Reinforcement Learning: Kernelized Value Function Approximation

29:34

SANJEEV SHARMA 8th Nov 2010: Reinforcement Learning: Phase-II, Presentation-4: ARL-10/11 - (Lecture-4). Kernelized Value Function Approximation.

CONTENTS: Kernel Methods, Kernel Based Regularized Least-Squares Regression, Kernelized Value Function Approximation.

DESCRIPTION: Kernel methods are widely used in machine learning. The kernel trick facilitates classification by circumventing the need to construct higher-order basis functions: a kernel maps the feature space to a higher-dimensional space, enabling separation of the dataset there. Reinforcement learning methods such as GPRL and KLSTD have utilized the kernel trick. The Kernelized Value Function Approximation algorithm unifies these methods and provides a model-based solution for approximating the state-value function; it can be used for prediction problems. It utilizes kernel-based regularized least-squares regression to find the relation between states and the corresponding expected γ-discounted total reward, and also uses this model to find the kernel for the next state. The value function finally results in the sum of a geometric progression involving the kernel matrices, yielding an analytical solution for the approximation problem.

Tags: Kernel Methods, Kernel Reinforcement Learning, Kernelized Value Function Approximation

Added: 1388 days ago by admin | Views: 1796 | Comments: 0 | Not yet rated

Reinforcement Learning: Geometric Analysis of BRM & Fixed-Point methods

22:51

SANJEEV SHARMA 5th Nov 2010: Reinforcement Learning: Phase-II, Presentation-3: ARL-10/11 - (Lecture-3): Geometric Analysis of Bellman Residual Minimization & Fixed-Point Methods.

CONTENTS: Bellman Residual Minimization, MDP, Fixed-Point Methods.

DESCRIPTION: The target of a control problem is to find an optimal control policy for a given task (domain/MDP). The algorithm that finds this optimal policy is policy iteration (PI), which is analogous to the EM algorithm. Its first step, the value determination step, computes the state-action value function for a given policy. The next step is the policy improvement step, which makes the next policy greedy with respect to the value function of the previous policy. The value determination step, if using least-squares algorithms, can be solved either by Bellman Residual Minimization (BRM) or by fixed-point (FP) methods. This lecture discusses the BRM and FP methods and provides a geometric interpretation of both: BRM and FP ultimately minimize the hypotenuse and the base of a right triangle, respectively.

Tags: Geometric Analysis, BRM, Bellman Residual Minimization, Fixed-Point Methods

Added: 1392 days ago by admin | Views: 2195 | Comments: 0 | Not yet rated

Reinforcement Learning: Fixed-Point Estimate of State-Action Value Function & Least-Squares Policy Iteration

42:33

SANJEEV SHARMA : 2nd Nov 2010: Reinforcement Learning: Phase-II, Presentation-2: ARL-10/11 - (Lecture-2): Fixed-Point Estimation of State-Action Value Function & Least-Squares Policy Iteration.

CONTENTS: Fixed-Point Estimation of State-Action Value Function, Least-Squares Policy Iteration (LSPI).

DESCRIPTION: LSPI is an off-policy algorithm, a modification of LSTD. LSPI has an advantage over the LSTD algorithm: it does not require new samples to be collected to compute the value function for each new policy. Being an off-policy algorithm, it can accept samples from any random policy; this provides the data efficiency that the LSTD algorithm lacked. LSPI uses an inner algorithm, LSQ (LSTDQ), to compute the value function for a policy, and this is the off-policy innovation in LSPI. LSPI returns the fixed-point solution, which is discussed in detail in the lecture.

Tags: LSPI, Fixed-Point Solution, Bellman Operator, Acrobot, Chain-Walk Domain

Added: 1395 days ago by admin | Views: 2311 | Comments: 0 | Not yet rated

Reinforcement Learning: Least-Squares Temporal Difference Learning

17:21

SANJEEV SHARMA : 24th Oct 2010: REINFORCEMENT LEARNING: Phase-II, Presentation-1 (P2P1): Least-Squares Temporal Difference Learning.

CONTENTS:
Value Function, Value Function Approximation, Linear Function Approximation, TD Learning, LSTD algorithm.

DESCRIPTION:
The LSTD algorithm is a modification of the TD learning methods. Though the two are the same in spirit, both solving for the fixed point, they differ in how they approach the solution: TD is an incremental, online algorithm, whereas LSTD is an offline, batch learning algorithm. TD iteratively drives the expected TD error to zero, whereas LSTD directly computes the weight vector for which the expected TD update is zero. In this lecture I discuss the LSTD algorithm.
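The "directly computes the weight vector" step can be sketched on a tiny example. The two-state chain MDP, the one-hot features, and γ = 0.9 below are my own toy assumptions, not from the lecture.

```python
# LSTD sketch: solve A w = b with
#   A = sum_t phi(s_t) (phi(s_t) - gamma * phi(s'_t))^T,
#   b = sum_t phi(s_t) * r_t.
# Chain: s0 -(r=0)-> s1 -(r=1)-> terminal; with one-hot features the
# solution w equals the value function: V(s1)=1, V(s0)=0.9.

gamma = 0.9
phi = {"s0": [1.0, 0.0], "s1": [0.0, 1.0], "end": [0.0, 0.0]}
samples = [("s0", 0.0, "s1"), ("s1", 1.0, "end")]   # (s, r, s')

A = [[0.0, 0.0], [0.0, 0.0]]
b = [0.0, 0.0]
for s, r, s2 in samples:
    f, f2 = phi[s], phi[s2]
    for i in range(2):
        b[i] += f[i] * r
        for j in range(2):
            A[i][j] += f[i] * (f[j] - gamma * f2[j])

# Solve the 2x2 system A w = b by Cramer's rule.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
w = [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
     (A[0][0] * b[1] - b[0] * A[1][0]) / det]
```

The batch solve recovers the exact values in one shot, where incremental TD would need many sweeps.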

Tags: Least-Squares Temporal Difference Learning, LSTD, Reinforcement Learning, Value Function Approximation, Linear Function Approximation

Added: 1403 days ago by admin | Views: 3385 | Comments: 0 | Not yet rated

Reinforcement Learning: Temporal Difference Learning

76:21

SANJEEV SHARMA : 21st March 2010: REINFORCEMENT LEARNING: Lecture - 5: TEMPORAL DIFFERENCE LEARNING.

CONTENTS:

Constant-alpha Monte Carlo; 1-Step Temporal Differencing; TD(0) for Prediction; Estimating Value Function using TD(0); SARSA On-Policy TD Control; Q-Learning Off-Policy TD Control; Actor-Critic Methods; R-Learning; Backup Diagrams for SARSA, Q-Learning, Monte Carlo and 1-Step TD(0).

DESCRIPTION:

In this lecture I first provide a very brief introduction to the temporal differencing {TD(0), or 1-step TD(0)} methods, then an overview of constant-α Monte Carlo and its similarity to temporal difference learning. I then discuss the prediction problem, i.e. estimating the state-value function using 1-step TD(0), and give the backup diagrams for the Monte Carlo and TD(0) methods. Next I discuss SARSA, an on-policy TD control algorithm, and Q-learning, an off-policy TD control algorithm, explaining in detail why SARSA is on-policy and Q-learning off-policy, first via the backup diagrams and then through the pseudocode for both. Then I discuss the actor-critic methods, and finally conclude with the R-learning algorithm.
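As a concrete companion to the SARSA/Q-learning discussion, here is a minimal Q-learning sketch. The three-state corridor environment and the hyperparameters are my own toy assumptions, not from the lecture.

```python
# Q-learning (off-policy TD control) on a corridor 0-1-2 -> goal(3):
# moving right eventually reaches the goal (reward 1); all else reward 0.
import random

random.seed(0)
gamma, lr, eps = 0.9, 0.5, 0.2
actions = ["L", "R"]
Q = {(s, a): 0.0 for s in range(3) for a in actions}

def step(s, a):
    s2 = s + 1 if a == "R" else max(s - 1, 0)
    return s2, (1.0 if s2 == 3 else 0.0)

for _ in range(500):                         # episodes
    s = 0
    while s != 3:
        if random.random() < eps:            # epsilon-greedy behavior policy
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Off-policy target: bootstrap on max_a' Q(s', a'), not the action taken.
        target = r if s2 == 3 else r + gamma * max(Q[(s2, a2)] for a2 in actions)
        Q[(s, a)] += lr * (target - Q[(s, a)])
        s = s2

policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(3)}
```

The max over next actions in the target is exactly what makes this off-policy: the learned policy is greedy even while behavior explores.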

PATH PLANNING:

In the second presentation on PATH PLANNING, I give a brief idea of what kind of reinforcement learning algorithms will help in Ellipsoidal Constrained Agent Navigation. TD methods are appropriate for path planning because of their online, incremental learning ability. This lecture is again supplementary material for Path Planning & Autonomous Navigation.

Tags: Reinforcement Learning, Temporal Difference Learning, Monte Carlo, SARSA, Q-Learning, R-Learning, Actor-Critic Learning, 1-Step TD(0)

Added: 1620 days ago by admin | Views: 2084 | Comments: 0 | 1 rating

Path Planning: Ellipsoidal Constrained Agent Navigation - Autonomous Navigation of UAVs & UGVs in Unknown Environments

50:10

Sanjeev Sharma : 11th March 2010: PATH PLANNING: PRESENTATION - 2, CRP - PRESENTATION - 1: Ellipsoidal Constrained Agent Navigation, Class: Convex-Reinforcement-Path,
Area: Path Planning for Autonomous Navigation of UGV in Urban Environment.


DESCRIPTION:

This presentation was delivered in March 2010, under the title "Path Planning: Ellipsoidal Constrained Agent Navigation - Autonomous Navigation of UGV in Urban Environments", when the ECAN algorithm was still in its development phase. The algorithm was later modified for UAV and UGV navigation in unseen environments. The presentation shows only point-sized (ground) robot navigation; for a better understanding, download the publication. The algorithm uses convex quadratically constrained quadratic programming (QCQP), semi-definite programming (SDP) and second-order cone programming (SOCP); SOCP handles finite, non-convex-shaped robots.

Sanjeev Sharma, *QCQP-Tunneling: Ellipsoidal Constrained Agent Navigation*. In Proceedings of Second IASTED International Conference on Robotics, Nov 7-9, 2011, Pittsburgh, USA.

Tags: QCQP, SDP, SOCP, Autonomous Navigation, Path Planning, Convex Programming, Continuous Environments

Added: 1630 days ago by admin | Views: 3205 | Comments: 0 | Not yet rated

Reinforcement Learning: Monte Carlo & Intro to Ellipsoidal Constrained Agent Navigation (Path Planning for UGV)

84:32

SANJEEV SHARMA : 4th March 2010: REINFORCEMENT LEARNING: Lecture - 4: MONTE CARLO & Introduction to Convex-Reinforcement-Path (Ellipsoidal Constrained Agent Navigation).

CONTENTS:
TALOS, Introduction to ELLIPSOIDAL METHODS for Path Planning, First & Every Visit Monte Carlo, Value function estimation, Problem of Infinite Episodes & Exploring Starts, Generalized Policy Iteration, Eliminating the assumption of infinite episodes, Monte Carlo ES, e-soft & e-greedy policies, Eliminating Exploring Starts, On-Policy & Off-Policy Monte Carlo, Estimating one policy while following another.

DESCRIPTION: In this video I first give a brief introduction to ECAN, a path planning algorithm, and explain the need for Monte Carlo (estimation) methods. I then explain first- and every-visit Monte Carlo, and value function estimation using Monte Carlo. Next I explain the problem of infinite episodes in the evaluation step of Generalized Policy Iteration (GPI), and the concept behind the value iteration algorithm for eliminating the need for infinite episodes in that step. I then discuss the Monte Carlo ES algorithm, which eliminates the need for infinite episodes by using exploring starts, and on-policy Monte Carlo, which eliminates both assumptions. Finally I discuss an algorithm for off-policy Monte Carlo in which one policy can be estimated while generating episodes from another, a requirement that is independent of the environment's dynamics, and conclude with off-policy Monte Carlo itself.
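First-visit Monte Carlo prediction, as described above, can be sketched in a few lines. The random-walk environment and episode count are my own toy assumptions, not from the lecture.

```python
# First-visit Monte Carlo: estimate V(s) as the average of the returns
# observed after the FIRST visit to s in each episode.
import random

random.seed(1)
gamma = 0.9

def run_episode():
    """Unbiased random walk on 0-1-2 -> goal(3); reward 1 on reaching goal."""
    s, traj = 0, []
    while s != 3:
        s2 = s + 1 if random.random() < 0.5 else max(s - 1, 0)
        traj.append((s, 1.0 if s2 == 3 else 0.0))
        s = s2
    return traj

returns = {s: [] for s in range(3)}
for _ in range(2000):
    traj = run_episode()
    G, first_G = 0.0, {}
    for s, r in reversed(traj):      # backward pass accumulates returns
        G = r + gamma * G
        first_G[s] = G               # overwriting keeps the FIRST visit's G
    for s, g in first_G.items():
        returns[s].append(g)

V = {s: sum(g) / len(g) for s, g in returns.items()}
```

States closer to the goal should get larger value estimates, since their discounted return arrives sooner.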

Tags: Reinforcement Learning, Monte Carlo, Ellipsoidal Constrained Agent Navigation, Path Planning, On-Policy Monte Carlo, Off-Policy Monte Carlo, e-soft, e-greedy, Exploring Starts, GPI

Added: 1637 days ago by admin | Views: 2087 | Comments: 0 | Not yet rated

Reinforcement Learning: Iterative Algorithms & Single Agent Path Planning in Static Environment under FOMDPs

43:37

SANJEEV SHARMA : 18th Jan 2010: REINFORCEMENT LEARNING: Lecture - 3: ITERATIVE ALGORITHMS & SINGLE AGENT PATH PLANNING IN FOMDPs. (Fully observable MDPs).

CONTENTS:

Optimal Value Functions, Bellman Optimality Equation, Relation b/w Optimal Action value function and Optimal State-Value Function, Policy Evaluation, Policy Iteration, Value Iteration, Policy Improvement, Agent Path Planning in Static Environment in FOMDPs.

DESCRIPTION:

In this lecture I first recap a few things from the previous lecture. I then introduce optimal policies, detail the relationship between the optimal state-value function and the optimal action-value function, and state the Bellman optimality equation for both. I also give a brief overview of my example of agent path planning in a static environment under fully observable MDPs. Finally, I detail the four most important algorithms: policy evaluation, policy improvement, policy iteration and value iteration.

Date - 23rd January 2011: The code I wrote about a year ago for this lecture: Path_Planning_Policy_Evaluation_Sanjeev.zip. Just run Sanjeev_Main_Path.m .
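A minimal Python stand-in for the grid path-planning example discussed in the lecture (the attached zip is MATLAB; this sketch is my own toy version, not that code):

```python
# Value iteration on a 3x3 grid: goal at (2,2), reward -1 per move,
# deterministic moves, no discounting. The fixed point is
# V(s) = -(Manhattan distance from s to the goal).

moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
goal = (2, 2)
states = [(r, c) for r in range(3) for c in range(3)]
V = {s: 0.0 for s in states}

for _ in range(20):                       # Bellman backups until convergence
    for s in states:
        if s == goal:
            continue                      # goal value stays 0
        best = -float("inf")
        for dr, dc in moves:
            nxt = (s[0] + dr, s[1] + dc)
            s2 = nxt if nxt in V else s   # bumping off-grid = stay put
            best = max(best, -1.0 + V[s2])
        V[s] = best
```

Reading the greedy action out of V at each cell then gives a shortest path to the goal.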

Tags: Reinforcement Learning, Policy Evaluation, Policy Improvement, Policy Iteration, Value Iteration, Bellman Optimality Equation, Single Agent Path Planning, FOMDPs, Static Environment

Added: 1682 days ago by admin | Views: 1753 | Comments: 0 | Not yet rated

Machine Learning: Kullback-Leibler Divergence & Convex Analysis

37:49

SANJEEV SHARMA: 11th Jan 2010: MACHINE LEARNING: Lecture-10: Kullback-Leibler Divergence & Convex Analysis.

Contents: Convex Analysis, Perspective Functions, EPIGRAPH, Information Gain, Entropy, Discrete Entropy, Differential Entropy, Noiseless Coding Theory, Relation b/w entropy and information, Kullback-Leibler Divergence.

Description: In this lecture I first discuss the convex analysis of the Kullback-Leibler divergence between two positive vectors. The KL divergence between two positive vectors is the sum of the relative entropy and a linear function of the vectors, whereas when the vectors represent probability distributions, the KL divergence is just the relative entropy between the two distributions. So the first part of the lecture covers the convex interpretation, epigraphs, perspective functions and convex analysis of the KL divergence, with each term discussed briefly. In the second part I discuss information gain theory and machine learning: first the information associated with observing the state of a particular discrete random variable, then the entropy associated with the distribution. I discuss the maximum-entropy case for a discrete random variable, i.e. the uniform distribution, and give the relation between entropy and information. I then cover differential entropy for continuous random variables, and finally provide the mathematical expression for the Kullback-Leibler divergence.
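The KL divergence formula for the discrete case can be sketched directly; the example distributions are my own, with the uniform distribution chosen because the lecture notes it is the maximum-entropy discrete distribution.

```python
# D_KL(p || q) = sum_i p_i * log(p_i / q_i), with the convention 0*log 0 = 0.
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]
q = [1 / 3, 1 / 3, 1 / 3]   # uniform: the maximum-entropy distribution

d_pq = kl(p, q)
d_pp = kl(p, p)
# KL is nonnegative, zero iff p = q, and NOT symmetric: kl(p,q) != kl(q,p).
```

The asymmetry is why KL is a divergence rather than a distance metric.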

Tags: Machine Learning, Kullback-Leibler Divergence, Perspective Function, Epigraph, Information Gain, Entropy, Differential Entropy, Discrete Entropy, Relative Entropy

Added: 1689 days ago by admin | Views: 2896 | Comments: 0 | Not yet rated

Reinforcement Learning: Value Functions and Markov Property

33:57

SANJEEV SHARMA: 5th Jan 2010: REINFORCEMENT LEARNING: Lecture-2: Value Functions and Markov Property. In this lecture I discuss episodic and continual tasks, as well as states, rewards, returns, discounted returns and the agent-environment interaction process. I give the details of the discounting parameter and prove that the expected return is finite through discounting. I then discuss the kinds of value functions, i.e. the state-value function and the action-value function of a policy, derive the expression for the state-value function of a policy, and interpret each term in the BELLMAN equation. I also give a very brief introduction to the MARKOV PROPERTY, MARKOV STATES and MDPs. More details about the Bellman equation and MDPs, including the Bellman optimality equation and the relation between state-value and action-value functions, are left for Lecture 3.

Tags: Reinforcement Learning, Value Functions, Bellman Equation, Markov Property

Added: 1692 days ago by admin | Views: 2201 | Comments: 0 | Not yet rated

Machine Learning: Agglomerative Hierarchical Clustering - BIC

13:26

SANJEEV SHARMA: 25th December 2009. Lecture-9: Agglomerative Hierarchical Clustering using the Bayesian Information Criterion. The Bayesian Information Criterion, or BIC for short, is a hypothesis-testing tool, like the Kullback-Leibler divergence. BIC is a parametric criterion and here assumes a Gaussian distribution over the dataset. Agglomerative hierarchical clustering is bottom-up clustering. In this lecture I show how to use the BIC score of a cluster set as the decision criterion for merging clusters: at each level we evaluate the BIC score, test candidate merges, and calculate the BIC score again at the next higher level. If the difference between the higher level and the lower level is greater than zero, the two clusters can be merged. We follow this procedure to merge clusters, and can also use it to find the most appealing clusters to merge.
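The BIC merge test can be sketched for 1-D Gaussian clusters. The tiny dataset is my own, and I use the common convention in which a lower BIC is better, so two clusters merge when the merged model's BIC is lower than the two-cluster model's.

```python
# BIC = k*ln(n) - 2*lnL, with k the number of free parameters and lnL the
# maximized log-likelihood; lower BIC = better trade-off of fit vs. complexity.
import math

def gauss_loglik(xs):
    """Max log-likelihood of a 1-D Gaussian fit (MLE mean and variance)."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def bic(loglik, k, n):
    return k * math.log(n) - 2 * loglik

c1, c2 = [1.0, 2.0, 3.0], [1.5, 2.5, 3.5]   # two heavily overlapping clusters
n = len(c1) + len(c2)

# Two-cluster model: 2 parameters (mean, variance) per cluster, k = 4.
bic_separate = bic(gauss_loglik(c1) + gauss_loglik(c2), k=4, n=n)
# Merged model: one Gaussian over all points, k = 2.
bic_merged = bic(gauss_loglik(c1 + c2), k=2, n=n)

merge = bic_merged < bic_separate   # True here: the clusters overlap
```

Well-separated clusters would flip the comparison, and the merge would be rejected.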

Tags: Hierarchical Clustering, Bayesian Information Criterion, BIC

Added: 1706 days ago by admin | Views: 2283 | Comments: 0 | Not yet rated

Path Planning: Ellipsoidal Surfaces & MVE.

14:20

SANJEEV SHARMA: 23rd November 2009. Presentation-1: Ellipsoidal Constrained Agent Navigation

Contents: Least Squares, SVM, Quadratic Constraints, Ellipsoids, Quadratic Discriminating Surface, Lowner-John Ellipsoid, Ellipsoidal Constraints, Semi-Definite Programming, Outlier Rejection.

Description: This presentation is not directly on path planning; it is intended to give an overview of two methods, ellipsoidal surfaces and Löwner-John (minimum volume) ellipsoids. An ellipsoidal surface is designed by setting a constraint on the P matrix and the quadratic expression. The minimum volume ellipsoid of a set C is the ellipsoid covering the convex hull of C, so finding it can be cast as a convex problem; finding an ellipsoidal surface can be cast as a feasibility SDP problem. This lecture gives an overview of how to combine the two algorithms and use them for outlier detection and discrimination between two classes, and it forms the base for upcoming lectures. In the first part I also introduce two algorithms that will be discussed later: the least-squares and SVM formulations.

Tags: Convex Optimization, Minimum Volume Ellipsoids, Ellipsoidal Surfaces

Added: 1709 days ago by admin | Views: 1949 | Comments: 0 | Not yet rated

Pentesting Part 2 - Getting GUI access with the VNC Payload

4:12

By Prateek on 15th Dec. In this video I demonstrate the use of the VNC payload, which gives us GUI access on the victim machine. The VNC payload makes use of a technique called reflective DLL injection, whereby a payload stage is injected into a compromised host process running in memory, never touching the host's hard drive. The VNC and Meterpreter payloads both make use of reflective DLL injection.

Tags: exploitation, metasploit, VNC

Added: 1716 days ago by admin | Views: 1338 | Comments: 0 | Not yet rated

Pentesting Part 1 - Using Metasploit to own a Box

4:34

By Prateek on 15th Dec. In this video I demonstrate a very simple usage of the Metasploit Framework, a tool for developing and executing exploit code against a remote target machine. The basic aim of this video is to demonstrate how an unpatched Windows box can be compromised easily by a malicious hacker. The basic steps are:
1. Choosing and configuring an exploit (code that enters a target system by taking advantage of one of its bugs; about 300 different exploits for Windows, Unix/Linux and Mac OS X systems are included);
2. Checking whether the intended target system is susceptible to the chosen exploit (optional);
3. Choosing and configuring a payload (code that will be executed on the target system upon successful entry, for instance a remote shell or a VNC server);
4. Choosing the encoding technique to encode the payload so that the intrusion-prevention system will not catch it;
5. Executing the exploit.

Tags: metasploit, hacking, exploitation

Added: 1716 days ago by admin | Views: 1344 | Comments: 0 | Not yet rated

Hacking Into a system Using Fast-Track

5:41

By Prateek Gianchandani on Dec 15, 2009 >> Fast-Track is a Python-based open-source project aimed at helping penetration testers identify, exploit, and further penetrate a network. It was released by David Kennedy at ShmooCon 2009. In this video I demonstrate how one can easily compromise an unpatched system using Fast-Track: it scans the system for open ports using Nmap and then uses Metasploit's autopwn to launch attacks against the system. We will be discussing all the concepts of penetration testing from scratch, so look out for more videos in this channel.

Pentesting
hacking
Autopwning

Added: 1717 days ago by
admin
Views: 2183
Comments: 0

( Not yet rated )

Machine Learning: Discrimination

7:04

SANJEEV SHARMA: 10th Nov 2009. Lecture 8, Machine Learning: Linear Discrimination. In this lecture I first introduce the concept of a separating hyperplane, then present three different data sets and show discrimination via least-squares discrimination, L1-norm fitting, and support vector machines. I examine the robustness of least-squares discrimination, which is not a very robust classifier, show the effect of outliers and distant data points, and present the results of using an SVM.
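Least-squares discrimination, one of the three methods the lecture compares, can be sketched in a few lines: code the labels as ±1, fit a linear function by least squares, and classify by its sign. The toy data below is my own illustration, not the lecture's data sets.

```python
import numpy as np

# Two linearly separable classes; labels coded as +1 / -1.
rng = np.random.default_rng(1)
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(30, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(30, 2))
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(30), -np.ones(30)])

# Augment with a bias column and fit w by least squares:
#   minimize || [X 1] w - y ||^2
Xa = np.hstack([X, np.ones((60, 1))])
w, *_ = np.linalg.lstsq(Xa, y, rcond=None)

# The separating hyperplane is {x : [x 1] . w = 0};
# classify by the sign of the fitted linear function.
pred = np.sign(Xa @ w)
accuracy = float(np.mean(pred == y))
```

Its known weakness, shown in the lecture, is that distant points pull the hyperplane toward them, which is what motivates the L1-norm and SVM alternatives.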

Least
squares
discrimination
SVM
L1
Norm

Added: 1751 days ago by
admin
Views: 1626
Comments: 0

( Not yet rated )

Machine Learning: Non Linear Regression

15:08

SANJEEV SHARMA: 3rd November 2009. Lecture 7 in Machine Learning: Non-Linear Regression. In this lecture I explain non-linear basis functions and the underlying point that the model remains linear in its parameters. I then derive the analytical solution, discuss under- and over-fitting, and give a hint of L1-norm regularization.
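The key point above, that a model non-linear in x can stay linear in its parameters, can be sketched with polynomial basis functions: the analytical least-squares solution still applies, and comparing a too-simple basis with an adequate one illustrates underfitting. The data and helper `fit_basis` are my own illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 1.0 - 2.0 * x + 0.5 * x**3 + rng.normal(scale=0.05, size=50)

def fit_basis(x, y, degree):
    """Least-squares fit with polynomial basis functions 1, x, ..., x^degree.
    The model is non-linear in x but linear in the parameters, so the
    normal equations give a closed-form solution."""
    Phi = np.vander(x, degree + 1)       # design matrix of basis functions
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w, Phi

w1, Phi1 = fit_basis(x, y, 1)            # straight line: underfits the cubic
w3, Phi3 = fit_basis(x, y, 3)            # matches the true model family
sse1 = float(np.sum((Phi1 @ w1 - y) ** 2))
sse3 = float(np.sum((Phi3 @ w3 - y) ** 2))
```

The degree-3 basis should leave only the noise as residual, while the degree-1 fit also absorbs the unmodeled cubic term into its error.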

Non-Linear
Regression
Machine
Learning

Added: 1759 days ago by
admin
Views: 1833
Comments: 0

( Not yet rated )

Reinforcement Learning: Introduction

6:39

SANJEEV SHARMA> Lecture 1: Reinforcement Learning. 26th Oct 2009. "Introduction To Reinforcement Learning". Reinforcement Learning (RL) is learning through experience. This video gives a very brief introduction to RL. In it I describe the goal and ultimate aim of this channel, introduce the target vehicles of the DARPA Urban Challenge 2007 that we will focus on, and demonstrate the basic elements of reinforcement learning.

Reinforcement
Learning

Added: 1761 days ago by
admin
Views: 1596
Comments: 0

( Not yet rated )

Machine Learning: Gaussian Discriminant Analysis

9:15

SANJEEV SHARMA>> Lecture 6, Machine Learning. 17th October. Gaussian Discriminant Analysis belongs to a broader family of algorithms known as Generative Learning Algorithms. In this video I model the target variable with a Bernoulli distribution and the feature vectors with a multivariate Gaussian distribution. I then give an overview of the joint log-likelihood (JLL) and demonstrate the result of taking the argmax of the JLL over the parameters. The trade-off between logistic regression and GDA will be discussed separately in Lecture 9.
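The maximum-likelihood parameters mentioned above (class prior, class means, shared covariance) have closed forms, and classification picks the class with the larger joint log-likelihood. A minimal NumPy sketch, with toy data of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
X0 = rng.multivariate_normal([0, 0], np.eye(2) * 0.3, size=50)   # class 0
X1 = rng.multivariate_normal([3, 3], np.eye(2) * 0.3, size=50)   # class 1
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(50), np.ones(50)]).astype(int)

# MLE parameters of GDA: Bernoulli prior, class means, shared covariance.
phi = y.mean()
mu0 = X[y == 0].mean(axis=0)
mu1 = X[y == 1].mean(axis=0)
centered = np.where(y[:, None] == 1, X - mu1, X - mu0)
Sigma = centered.T @ centered / len(y)
Sinv = np.linalg.inv(Sigma)

def predict(x):
    """Pick the class with the larger joint log-likelihood log p(x, y)
    (constant terms shared by both classes are dropped)."""
    ll0 = -0.5 * (x - mu0) @ Sinv @ (x - mu0) + np.log(1 - phi)
    ll1 = -0.5 * (x - mu1) @ Sinv @ (x - mu1) + np.log(phi)
    return int(ll1 > ll0)

accuracy = float(np.mean([predict(x) == t for x, t in zip(X, y)]))
```

With a shared covariance the resulting decision boundary is linear, which is part of the GDA-versus-logistic trade-off deferred to Lecture 9.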

Generative
Learning
Algorithms
Gaussian
Discriminant
Analysis
Multivariate
Gaussian
Bernoulli
Joint
Log
Likelihood

Added: 1775 days ago by
admin
Views: 2859
Comments: 0

( Not yet rated )

Kismet - A Wardriving tool

6:32

By Prateek Gianchandani on Oct 16
Kismet is an 802.11 layer-2 wireless network detector, sniffer, and intrusion detection system. Kismet works with any wireless card that supports raw monitoring (rfmon) mode and can sniff 802.11b, 802.11a, and 802.11g traffic. Kismet identifies networks by passively collecting packets, detecting standard named networks, detecting (and, given time, decloaking) hidden networks, and inferring the presence of non-beaconing networks via data traffic.

kismet
wardriving
sniffer

Added: 1776 days ago by
admin
Views: 1317
Comments: 0

( Not yet rated )

Socket Programming In python Part-3

4:56

By Prateek Gianchandani on Oct 15

In this video I show you how to create a TCP client using sockets. The client first connects to the server on the specified port and then prints a message confirming that it has connected to the server.
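The client/server pair described across these three parts can be sketched in one self-contained snippet: a server that accepts a connection and replies with a confirmation message, and a client that connects and reads it. Port 0 (letting the OS pick a free port) and the message text are my own choices for the sketch.

```python
import socket
import threading

HOST, MESSAGE = "127.0.0.1", b"connected to server\n"

# --- server: accept one connection and reply with a message ---
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, 0))              # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _addr = server.accept()
    conn.sendall(MESSAGE)
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# --- client: connect on the server's port and read the confirmation ---
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((HOST, port))
received = b""
while True:                         # read until the server closes
    chunk = client.recv(1024)
    if not chunk:
        break
    received += chunk
client.close()
t.join()
server.close()
```

In the videos the server port is fixed (hence the firewall note in Part 2); binding to port 0 here just keeps the sketch runnable anywhere.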

Programming python socket

Added: 1777 days ago by
admin
Views: 1638
Comments: 0

( Not yet rated )

Socket Programming In python Part-2

5:13

By Prateek Gianchandani on Oct 15
In the second part of Python socket programming I show you how to code a socket server that can receive connections and reply with a message. Please note that for this socket server we need to allow its port through our firewall.

Programming
python
socket

Added: 1778 days ago by
admin
Views: 1333
Comments: 0

( Not yet rated )

Socket Programming In python Part-1

4:11

By Prateek Gianchandani on Oct 15 '09
A socket is the endpoint of a communication channel; it is used to send and receive data. In this first part of Socket Programming I cover the basics of socket programming in Python.

Programming
python
socket

Added: 1778 days ago by
admin
Views: 1542
Comments: 0

( Not yet rated )

Putting Wireless Interface Into monitor mode

3:12

By Prateek on 12th Oct,09
Putting your wireless card into monitor mode allows you to monitor traffic without associating to any access point. This is different from promiscuous mode, which applies to both wired and wireless networks. In this video I show you how to put your wireless card into monitor mode.

Monitor
mode
interface
wireless

Added: 1781 days ago by
admin
Views: 1958
Comments: 0

( Not yet rated )

Machine Learning: Exponential Family Distribution & Sufficient Statistics

19:38

Sanjeev Sharma> This is the 5th lecture in ML: Exponential Family Distributions and Sufficient Statistics. In this lecture I cover the exponential family of distributions; almost every common distribution can be written in exponential-family form. In the first part I show a step-wise derivation of sufficient statistics; then I show how to write the Gaussian and Bernoulli distributions in exponential-family form.
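The Bernoulli case from the lecture can be checked numerically: writing p(x) = exp(eta*T(x) - a(eta)) with natural parameter eta = log(p/(1-p)), sufficient statistic T(x) = x, and log-partition a(eta) = log(1 + e^eta) reproduces the usual pmf exactly. The helper names below are mine.

```python
import math

def bernoulli_pmf(x, p):
    """Standard form: p^x * (1-p)^(1-x) for x in {0, 1}."""
    return p**x * (1 - p)**(1 - x)

def bernoulli_exp_family(x, p):
    """Bernoulli in exponential-family form p(x) = exp(eta*T(x) - a(eta)),
    with eta = log(p / (1 - p)), T(x) = x, a(eta) = log(1 + e^eta)."""
    eta = math.log(p / (1 - p))
    a = math.log(1 + math.exp(eta))
    return math.exp(eta * x - a)

# the two parameterizations agree at both support points
vals = [(bernoulli_pmf(x, 0.3), bernoulli_exp_family(x, 0.3)) for x in (0, 1)]
```

The same exercise for the Gaussian yields a two-dimensional natural parameter and sufficient statistic (x, x^2), as derived in the lecture.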

Machine
Learning
Exponential
Family
Distribution
Sufficient
Statistics
Gaussian
Bernoulli
Distribution

Added: 1789 days ago by
admin
Views: 2057
Comments: 0

( Not yet rated )

Machine Learning: Probabilistic Interpretation of Least-Squares

14:14

Sanjeev Sharma> This is the 4th lecture in ML. In this lecture I present the probabilistic interpretation of least-squares regression and explain the reason for choosing the least-squares error function in the regression problem. For this we assume a Gaussian (normal) distribution of the error terms. The later part covers the relation between maximum likelihood and least squares. (I also hint at the results you would get with other error distributions, such as the Poisson or Laplacian, but this is a topic of numerical optimization and will be discussed in the Optimization Channel.)
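The equivalence described above, that under Gaussian errors maximizing the likelihood is the same as minimizing squared error, can be checked numerically: the least-squares solution attains a higher Gaussian log-likelihood than perturbed parameter vectors. The synthetic data and noise level are my own.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.hstack([rng.uniform(-1, 1, size=(100, 1)), np.ones((100, 1))])
true_w = np.array([2.0, -1.0])
y = X @ true_w + rng.normal(scale=0.1, size=100)   # Gaussian error terms

def log_likelihood(w, sigma=0.1):
    """Gaussian log-likelihood of y given Xw. Since w enters only through
    the residual sum of squares, maximizing it over w is exactly
    minimizing the least-squares error."""
    r = y - X @ w
    n = len(y)
    return -n / 2 * np.log(2 * np.pi * sigma**2) - (r @ r) / (2 * sigma**2)

w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# the least-squares solution beats any perturbed parameter vector
perturbed = [w_ols + rng.normal(scale=0.05, size=2) for _ in range(5)]
```

With a Laplacian error model the same argument would instead yield the L1 (absolute-error) objective, which is the hint deferred to the Optimization Channel.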

Machine
Learning
Maximum
Log
Likelihood
Probabilistic
Interpretation
Least
Squares
Optimization

Added: 1790 days ago by
admin
Views: 1776
Comments: 0

( Not yet rated )

Machine Learning: Logistic Regression.

25:38

Sanjeev Sharma> This is the 3rd lecture in ML. In this lecture I present logistic regression, a member of the exponential family that is often used in classification, and explain the need for it. In the later part of the lecture I cover gradient ascent and Newton's method for maximizing the log-likelihood. In general, Newton's method works quite well for logistic regression, so the case where the Hessian is ill-conditioned is not covered here; it is a topic of numerical optimization and will be covered in the Optimization Channel when I discuss the inexact Newton's method, where an approximate Hessian is used for updating the parameters.
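Newton's method for the logistic log-likelihood can be sketched directly: the gradient is X^T(y - p) and the Hessian is -X^T W X with W = diag(p(1-p)). The toy data and the tiny ridge term (a numerical safeguard against a near-singular Hessian, not part of the lecture) are my own additions.

```python
import numpy as np

rng = np.random.default_rng(4)
X0 = rng.normal(loc=-1.5, scale=1.0, size=(100, 1))
X1 = rng.normal(loc=1.5, scale=1.0, size=(100, 1))
X = np.hstack([np.vstack([X0, X1]), np.ones((200, 1))])   # add bias column
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
for _ in range(10):                     # Newton's method on the log-likelihood
    p = sigmoid(X @ w)
    grad = X.T @ (y - p)                # gradient of the log-likelihood
    W = p * (1 - p)                     # Hessian weights (diagonal of W)
    H = -(X.T * W) @ X                  # Hessian (negative definite)
    # small ridge keeps H invertible if the classes are nearly separable
    w = w - np.linalg.solve(H - 1e-8 * np.eye(2), grad)

accuracy = float(np.mean((sigmoid(X @ w) > 0.5) == (y == 1)))
```

On overlapping data like this, a handful of Newton steps suffices, which is the "works quite well" behavior noted in the description; gradient ascent would need far more iterations.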

Machine
Learning
Logistic
Regression
Sigmoid
Gradient
Descent
Gradient
Ascent
Newton
Inexact
Newton
Method

Added: 1791 days ago by
admin
Views: 11657
Comments: 0

( Not yet rated )

Machine Learning: Locally Weighted Regression

7:37

Sanjeev Sharma> This is the 2nd lecture in ML. Locally weighted regression (LWR) is classified as a non-parametric algorithm. In machine learning and optimization, LWR is used to fit a curve locally to the data set; a weighting function tells the algorithm which instances to focus on.
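The weighting idea above can be sketched with a Gaussian weighting function: each query point solves its own weighted least-squares problem, fitting a local line to the nearby instances. The curve, bandwidth `tau`, and helper name `lwr_predict` are my own illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.0, 2.0, 200)
y = x**2 + rng.normal(scale=0.02, size=200)     # noisy samples of a curve

def lwr_predict(x0, tau=0.2):
    """Locally weighted linear regression at query point x0.
    A Gaussian weighting function focuses the fit on instances
    near x0; each query solves its own weighted least squares."""
    w = np.exp(-(x - x0) ** 2 / (2 * tau**2))    # instance weights
    Xa = np.vstack([x, np.ones_like(x)]).T       # local linear model
    A = Xa.T @ (w[:, None] * Xa)                 # weighted normal equations
    b = Xa.T @ (w * y)
    theta = np.linalg.solve(A, b)
    return float(theta @ np.array([x0, 1.0]))

pred = lwr_predict(1.0)                          # true value is 1.0
```

Being non-parametric, the whole training set must be kept around and a new fit solved per query, which is the price paid for fitting curvature with only local lines.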

Machine
Learning
Optimization
Locally
weighted
regression
least
squares
fitting

Added: 1792 days ago by
admin
Views: 2558
Comments: 0

( Not yet rated )

Machine Learning: Linear Regression.

19:39

Sanjeev Sharma. This is the 1st lecture in the Machine Learning Channel. The lecture is about linear regression: fitting a line to data using the least-squares error function. (Handling outliers is not discussed here; it will be covered in upcoming lectures, especially in Optimization.)
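Fitting a line with the least-squares error function takes only a few lines of NumPy; the synthetic data below (a noisy y = 2x + 1) is my own example, not the lecture's.

```python
import numpy as np

# Noisy samples from the line y = 2x + 1.
rng = np.random.default_rng(6)
x = rng.uniform(-5, 5, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

# Least-squares error function: minimize sum_i (a*x_i + b - y_i)^2.
# Stacking [x, 1] as a design matrix, lstsq solves the normal equations.
X = np.vstack([x, np.ones_like(x)]).T
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
```

With light Gaussian noise the recovered slope and intercept land very close to the true (2, 1); a single far-off outlier would drag them noticeably, which is the problem deferred to the later lectures.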

Machine
Learning
Linear
Regression
Least
Squares

Added: 1798 days ago by
admin
Views: 2415
Comments: 1

(2 ratings)

Audio Reconstruction -An Optimization Approach

9:16

Sanjeev Sharma> This is MY OWN PROJECT. Audio reconstruction, and more generally signal reconstruction, is a field of active research. Here a signal is enhanced by removing noise (the details depend on the application), and feature preservation is a very important aspect. In this video I present a method of audio reconstruction.
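The video's exact method isn't specified here, so as a sketch in the same spirit (and matching the "Regularization / Least Squares" tags), here is quadratic-smoothing reconstruction: minimize ||x - y||^2 + lam*||Dx||^2 with D the first-difference operator. The L1-norm variant in the tags would replace the quadratic penalty to better preserve sharp features. Signal and lam are my own choices.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 300)
clean = np.sin(2 * np.pi * 3 * t)                  # "true" signal
noisy = clean + rng.normal(scale=0.2, size=300)    # corrupted observation

# Reconstruction: minimize over x   ||x - noisy||^2 + lam * ||D x||^2
# where D is the first-difference matrix. Setting the gradient to zero
# gives the linear system (I + lam * D^T D) x = noisy.
n = len(t)
D = np.diff(np.eye(n), axis=0)                     # (n-1) x n differences
lam = 20.0
x_rec = np.linalg.solve(np.eye(n) + lam * D.T @ D, noisy)

err_noisy = float(np.linalg.norm(noisy - clean))
err_rec = float(np.linalg.norm(x_rec - clean))     # should be much smaller
```

The regularization weight trades noise suppression against smoothing away genuine signal features, which is exactly the feature-preservation tension mentioned above.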

Audio
Reconstruction
Signal
Reconstruction
Regularization
Least
Squares
L1
Norm
Optimization

Added: 1802 days ago by
admin
Views: 1569
Comments: 0

(2 ratings)

Hand/Finger Detection Using Image Processing

12:49

Sanjeev Sharma>> This was my own project. In this video I demonstrate a very simple, easy-to-apply algorithm for hand/finger detection using basic tools and methods of image processing, and show the results obtained after each step of the algorithm.
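The morphology tag suggests binary erosion and dilation are among the basic tools used; as an illustration (not the video's actual pipeline), here they are from scratch in NumPy, combined into an "opening" that removes specks smaller than the structuring element:

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element:
    a pixel becomes 1 if any pixel in its neighbourhood is 1."""
    pad = k // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def erode(img, k=3):
    """Binary erosion: a pixel stays 1 only if its whole k x k
    neighbourhood is 1 (border padded with 1s, so edges are kept)."""
    pad = k // 2
    p = np.pad(img, pad, constant_values=1)
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# "opening" (erosion then dilation) removes isolated noise pixels
# while restoring the shape of larger regions such as a hand blob
img = np.zeros((10, 10), dtype=np.uint8)
img[2:8, 2:8] = 1          # a solid blob (stand-in for a hand region)
img[0, 0] = 1              # an isolated noise pixel
opened = dilate(erode(img))
```

In a real pipeline this would follow skin-color thresholding, cleaning up the binary mask before contour or fingertip analysis.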

Image
Processing
Hand
detection
morphology

Added: 1806 days ago by
admin
Views: 3591
Comments: 0

(1 rating)

Deauthentication attacks

3:31

Prateek Gianchandani> Deauthentication attacks are used to disconnect a client from an access point. These attacks are useful while cracking WPA because they let an attacker capture the four-way handshake that occurs during reassociation.

deauthentication
aireplay

Added: 1806 days ago by
admin
Views: 1955
Comments: 0

( Not yet rated )

Etherape

1:06

Prateek> EtherApe is a graphical network monitor for Unix modeled after etherman. Featuring link-layer, IP, and TCP modes, it displays network activity graphically: hosts and links change size with traffic, and protocols are shown color-coded.
It supports Ethernet, FDDI, Token Ring, ISDN, PPP, and SLIP devices. It can filter the traffic to be shown, and can read traffic from a file as well as live from the network.

etherape
network
traffic

Added: 1815 days ago by
admin
Views: 1759
Comments: 0

( Not yet rated )

Nmap Part 2 (Port scanning)

8:21

Prateek> In this video we show how one can use Nmap to find out which ports are open on a specific computer. We demonstrate different types of port scans and then compare the results to obtain more detailed information.
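Nmap itself is a standalone tool, but the simplest scan type it offers, the TCP connect scan, is easy to sketch in Python (the language used elsewhere on this channel): a port counts as open if a full TCP connection succeeds. The helper name `tcp_connect_scan` and the localhost demo are my own.

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.3):
    """Minimal TCP 'connect' scan: report a port open if a full TCP
    connection can be established. Real Nmap also offers half-open
    SYN scans and UDP scans, which need raw sockets."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            if s.connect_ex((host, port)) == 0:   # 0 means connected
                open_ports.append(port)
        finally:
            s.close()
    return open_ports

# demo: open one listening socket locally, scan it, then close it
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]
found = tcp_connect_scan("127.0.0.1", [open_port])
listener.close()
closed = tcp_connect_scan("127.0.0.1", [open_port])  # now refused
```

Comparing scan types, as the video does, matters because a connect scan completes the handshake and is therefore the most visible to the target.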

Nmap
port
scanning
hosts

Added: 1822 days ago by
admin
Views: 1842
Comments: 0

( Not yet rated )

Lecture 2- Line Search Methods Part 3

9:55

By Sanjeev>> Line search is one of the ways of solving unconstrained optimization problems. In this lecture I discuss the line-search algorithm, then the Wolfe and strong Wolfe conditions and their practical meaning, illustrated in MATLAB. I then prove a result about the step length and derive necessary conditions for our function.
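A minimal line-search sketch (my own, in Python rather than the lecture's MATLAB): backtracking that enforces the sufficient-decrease (Armijo) part of the Wolfe conditions, driving steepest descent on a simple quadratic. The curvature (second Wolfe) condition needs a proper bracketing routine and is omitted here.

```python
def f(x):
    return float((x - 2.0) ** 2)      # simple smooth objective, min at x = 2

def grad(x):
    return 2.0 * (x - 2.0)

def backtracking_line_search(x, p, c1=1e-4, rho=0.5, alpha=1.0):
    """Shrink alpha until the sufficient-decrease (Armijo) condition
        f(x + a*p) <= f(x) + c1 * a * grad(x) * p
    holds; p must be a descent direction (grad(x) * p < 0)."""
    while f(x + alpha * p) > f(x) + c1 * alpha * grad(x) * p:
        alpha *= rho
    return alpha

x = 0.0
for _ in range(20):                   # steepest descent with line search
    p = -grad(x)                      # descent direction
    if p == 0.0:
        break                         # gradient vanished: at the minimizer
    a = backtracking_line_search(x, p)
    x = x + a * p
```

The Armijo condition alone can accept overly short steps; the curvature condition covered in the lecture rules those out, completing the Wolfe conditions.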

line
search
wolfe

Added: 1839 days ago by
admin
Views: 1870
Comments: 0

(1 rating)

Reinforcement Learning: Temporal Difference Learning

76:21

SANJEEV SHARMA: 21st March 2010. REINFORCEMENT LEARNING, Lecture 5: TEMPORAL DIFFERENCE LEARNING.

CONTENTS:

Constant-alpha Monte Carlo; 1-Step Temporal Differencing; TD(0) for Prediction; Estimating Value Function using TD(0); SARSA On-Policy TD Control; Q-Learning Off-Policy TD Control; Actor-Critic Methods; R-Learning; Backup Diagrams for SARSA, Q-Learning, Monte Carlo and 1-Step TD(0).

DESCRIPTION:

In this lecture I first give a very brief introduction to temporal-difference methods {TD(0), or 1-step TD(0)}. I then present constant-alpha Monte Carlo and its similarity to temporal-difference learning, and discuss the prediction problem, i.e. estimating the state-value function using 1-step TD(0), with backup diagrams for the Monte Carlo and TD(0) methods. Next I cover SARSA, an on-policy TD control algorithm, and Q-Learning, an off-policy TD control algorithm, and discuss in detail why SARSA is on-policy and Q-Learning is off-policy, first via the backup diagrams and then through the pseudocode for both. Finally I discuss actor-critic methods and the R-Learning algorithm.

PATH PLANNING:

In the 2nd presentation on PATH PLANNING, I give a brief idea of which reinforcement learning algorithms will help in Ellipsoidal Constrained Agent Navigation. TD methods are appropriate for path planning because of their online, incremental learning ability. This lecture is again supplementary material for Path Planning & Autonomous Navigation.
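Q-Learning's off-policy character shows up directly in its update, which bootstraps from the max over next actions regardless of which action the behaviour policy actually takes. A minimal sketch on a toy chain MDP of my own (states 0..4, reward on reaching state 4):

```python
import random

random.seed(0)

N_STATES, GOAL = 5, 4          # chain 0-1-2-3-4, reward on reaching state 4
ACTIONS = (-1, +1)             # left, right
alpha, gamma, eps = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r

for _episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy behaviour policy (random tie-breaking)
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a_: (Q[(s, a_)], random.random()))
        s2, r = step(s, a)
        # Q-Learning target: max over next actions -> off-policy,
        # independent of the action the behaviour policy will pick
        target = r + gamma * max(Q[(s2, a_)] for a_ in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda a_: Q[(s, a_)]) for s in range(GOAL)]
```

SARSA would instead bootstrap from the action the epsilon-greedy policy actually chooses next, making it on-policy, which is exactly the distinction the lecture draws via backup diagrams and pseudocode.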

Reinforcement
Learning
Temporal
Difference
Learning
Monte
Carlo
SARSA
Q-Learning
R-Learning
Actor-Critic
Learning
1-Step
TD(0)

Added: 1620 days ago by
admin
Views: 2084
Comments: 0

(1 rating)

Lecture 1: Introduction to Mathematical Optimization Part 2

9:00

By Sanjeev >>> 10th Aug 2009. This is part 2 of the series of videos on mathematical optimization. In this video I give an overview of general mathematical optimization problems and a necessary condition for a minimizer.
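The first-order necessary condition covered in this series (at an interior minimizer the gradient must vanish) can be checked numerically with a central-difference approximation; the example function is my own.

```python
def f(x):
    return (x - 3.0) ** 2 + 1.0       # smooth function with minimizer x* = 3

def numerical_gradient(g, x, h=1e-6):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

# First-order necessary condition: the gradient vanishes at the
# minimizer, but is nonzero elsewhere (here f'(1) = -4).
grad_at_min = numerical_gradient(f, 3.0)
grad_elsewhere = numerical_gradient(f, 1.0)
```

The condition is only necessary, not sufficient: a vanishing gradient also occurs at maximizers and saddle points, which is why the later lectures need line-search and trust-region machinery rather than gradient checks alone.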

optimization
line
search
trust
region
methods

Added: 1843 days ago by
admin
Views: 1801
Comments: 0

( Not yet rated )

Netcat

9:49

By Prateek >> 3rd August 2009
Netcat is a command-line utility that can read and write data over network connections. It can be used to create backdoors, listen on ports, act as a proxy, perform port scans, and more. It is also called the "Swiss Army knife" of TCP/IP because of its wide variety of uses. It comes preinstalled with BackTrack.

netcat
backdoors
port
scanner

Added: 1850 days ago by
admin
Views: 1614
Comments: 0

( Not yet rated )

Macchanger

3:41

In this episode we discuss a utility known as Macchanger, which is used to change the MAC address of an interface. This helps us spoof our identity while performing packet injection or other attacks over a network. We can set the MAC address to any specified value. Macchanger works on Linux and is preinstalled in BackTrack.

macchanger
mac
address

Added: 1851 days ago by
admin
Views: 1523
Comments: 0

( Not yet rated )

Lecture 1: Introduction to Mathematical Optimization Part 1

7:43

By Sanjeev >>> 10th Aug 2009. This is part 1 of the series of videos on mathematical optimization. In this video I give an overview of general mathematical optimization problems and a necessary condition for a minimizer.

optimization
line
search
trust
region
methods

Added: 1843 days ago by
admin
Views: 2095
Comments: 0

( Not yet rated )

Lecture 1: Introduction to Mathematical Optimization Part 3

5:08

By Sanjeev >>> 10th Aug 2009. This is part 3 of the series of videos on mathematical optimization. In this video I give an overview of general mathematical optimization problems and a necessary condition for a minimizer.

optimization
line
search
trust
region
methods

Added: 1843 days ago by
admin
Views: 1475
Comments: 0

( Not yet rated )

Arp spoofing using Arpspoof

3:54

ARP spoofing is an effective way to intercept, sniff, hijack, and DoS connections. It is a more effective way of hijacking sessions because it allows attackers to see incoming and outgoing communications, as if they were a proxy.

Arp
spoofing
arpspoof

Added: 1842 days ago by
admin
Views: 1776
Comments: 0

( Not yet rated )

Lecture 2-Line Search Methods Part 1

8:51

By Sanjeev>> Line search is one of the ways of solving unconstrained optimization problems. In this lecture I discuss the line-search algorithm, then the Wolfe and strong Wolfe conditions and their practical meaning, illustrated in MATLAB. I then prove a result about the step length and derive necessary conditions for our function.

Line
Search
Step
Length
optimization
wolfe
strong
wolfe

Added: 1839 days ago by
admin
Views: 1817
Comments: 0

( Not yet rated )

Lecture 2-Line Search Methods Part 2

9:57

By Sanjeev>> Line search is one of the ways of solving unconstrained optimization problems. In this lecture I discuss the line-search algorithm, then the Wolfe and strong Wolfe conditions and their practical meaning, illustrated in MATLAB. I then prove a result about the step length and derive necessary conditions for our function.

wolfe
line
search
optimization

Added: 1839 days ago by
admin
Views: 1768
Comments: 0

( Not yet rated )

DATA MINING Introduction

2:31

By Sanjeev>> This video explains what data mining actually is and why we need to study it.

Data
Mining

Added: 1832 days ago by
admin
Views: 1801
Comments: 0

( Not yet rated )

Nmap Part 1 (network mapping )

7:06

Prateek> In this video I show how Nmap can be used to map a network and display the hosts that are up on it. I also show some techniques that can be used to speed up Nmap.

Nmap
port
scanning
hosts

Added: 1822 days ago by
admin
Views: 1875
Comments: 0

( Not yet rated )

MACHINE LEARNING: Agglomerative Hierarchical Clustering - BIC

13:26

SANJEEV SHARMA: 25th December 2009. Lecture 9: Agglomerative Hierarchical Clustering using the Bayesian Information Criterion. The Bayesian Information Criterion (BIC) is a hypothesis-testing criterion, like the Kullback-Leibler divergence. BIC is a parametric model and assumes a Gaussian distribution over the data set. In agglomerative hierarchical clustering we cluster bottom-up. In this lecture I show how to use the BIC score of a cluster set as the decision criterion for merging clusters: at each level we evaluate the BIC score, test a candidate merge, and compute the BIC score at the next higher level. If the difference between the higher and lower levels is greater than zero, the two clusters can be merged. Following this procedure we merge clusters, and can also use it to find the most appealing clusters to merge.
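The merge rule above can be sketched for 1-D Gaussian clusters: score each cluster by its Gaussian log-likelihood minus a (k/2)·ln(n) complexity penalty, and merge when the merged score exceeds the separate scores combined. The BIC-score convention, helper names, and toy data are my own assumptions for the sketch.

```python
import math

def gaussian_bic(cluster):
    """BIC score of a single 1-D Gaussian fitted to `cluster` by MLE:
    log-likelihood minus a (k/2) * ln(n) penalty (k = 2 parameters,
    mean and variance)."""
    n = len(cluster)
    mu = sum(cluster) / n
    var = sum((x - mu) ** 2 for x in cluster) / n
    ll = sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
             for x in cluster)
    return ll - (2 / 2) * math.log(n)

def should_merge(c1, c2):
    """Merge if the BIC score at the higher level (one merged Gaussian)
    exceeds the combined score at the lower level (two Gaussians),
    i.e. the difference between levels is greater than zero."""
    return gaussian_bic(c1 + c2) - (gaussian_bic(c1) + gaussian_bic(c2)) > 0

overlapping_a = [1.0, 1.1, 0.9, 1.05, 0.95]   # two samples of the
overlapping_b = [1.02, 0.98, 1.08, 0.92, 1.0] # same population: merge
far_away = [9.0, 9.1, 8.9, 9.05, 8.95]        # distinct population: keep
```

Overlapping clusters gain from merging (one parameter set, little likelihood loss), while well-separated clusters lose far more likelihood than the penalty saves.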

Hierarchical
Clustering
Bayesian
Information
Criterion
BIC

Added: 1706 days ago by
admin
Views: 2283
Comments: 0

( Not yet rated )

Machine Learning: Discrimination

7:4

SANJEEV SHARMA: 10th Nov 2009. Lecture-8 Machine Learning - Linear Discrimination. In this lecture I first mentioned about the concepts of Separating Hyperplane, then I provided 3 different Data Sets and shown the Discrimination via Least Squares Discrimination, L1 Norm Fitting and Support Vector Machines. I provided the results of Robustness of LS Discrimination which is not very robust classifier. I also provided the evidence of outliers and distant data points. I also provided the results of using SVM.

Least
squares
discrimination
SVM
L1
Norm

Added: 1751 days ago by
admin
Views: 1626
Comments: 0

( Not yet rated )

Pentesting Part 2 - Getting GUI access with the VNC Payload

4:12

By Prateek on 15th Dec, In this video i demonstrate the use of the VNC payload which helps us in getting GUI access on the victim machine.The VNC payload makes use of the technique called Reflective DLL Injection .Reflective DLL Injection is a technique whereby a stage payload is injected into a compromised host process running in memory, never touching the host hard drive. The VNC and Meterpreter payloads both make use of reflective DLL injection.

exploitation
metasploit
VNC

Added: 1716 days ago by
admin
Views: 1338
Comments: 0

( Not yet rated )

Path Planning: Ellipsoidal Surfaces & MVE.

14:20

SANJEEV SHARMA: 23rd Novomber 2009. Presentation-1: Ellipsoidal Constrained Agent Navigation

Contents: Least Squares, SVM, Quadratic Constraints, Ellipsoids, Quadratic Discriminating Surface, Lowner-John Ellipsoid, Ellipsoidal Constraints, Semi-Definite Programming, Outlier Rejection.

Description: This presentation is not directly on path planning but it is intended to give you an overview of two methods known as Ellipsoidal Surfaces and Lowner-John (Minimum Volume) Ellipsoids. Ellipsoidal surface is designed by setting constraint on your P matrix and quadratic expression. Minimum Volume Ellipsoid of a set C, is the ellipsoid covering the Convex Hull of set C. Therefore finding a minimum volume ellipsoid can be cast as a Convex Problem. Finding Ellipsoidal Surface can be cast as a Feasibility SDP Problem. This lecture will give you an overview of how to combine the two algorithms and use for outlier detection and discrimination of two classes and will form the base for upcoming lectures. In the first part itself, I provided 2 algorithms which will be discussed later i.e. Least Squares and SVM Formulations.

Convex
Optimization
Minimum
Volume
Ellipsoids
Ellipsoidal
Surfaces

Added: 1709 days ago by
admin
Views: 1949
Comments: 0

( Not yet rated )

Hacking Into a system Using Fast-Track

5:41

By Prateek Gianchandani on Dec 15 ,2009 >> Fast-Track is a python based open-source project aimed at helping Penetration Testers in an effort to identify, exploit, and further penetrate a network. It was released By David Kennedy on Shmoocon 2009. In this video i demonstrate how one can easily compromise an unpatched system using Fast-track. Basically what it does is scan the system for open ports using Nmap, and then use Metasploit Autopwn to launch attacks against the system. We will be discussing all the concepts in Penetration Testing from scratch . So look out for more videos in the same channel.

Pentesting
hacking
Autopwning

Added: 1717 days ago by
admin
Views: 2183
Comments: 0

( Not yet rated )

Pentesting Part 1 - Using Metasploit to own a Box

4:34

By Prateek on 15th Dec. In this video I demonstrate a very simple usage of the Metasploit Framework, a tool for developing and executing exploit code against a remote target machine. The basic aim of this video is to demonstrate how an unpatched Windows box could be compromised easily by a malicious hacker. The typical workflow is:
1. Choosing and configuring an exploit (code that enters a target system by taking advantage of one of its bugs; about 300 different exploits for Windows, Unix/Linux and Mac OS X systems are included);
2. Checking whether the intended target system is susceptible to the chosen exploit (optional);
3. Choosing and configuring a payload (code that will be executed on the target system upon successful entry, for instance a remote shell or a VNC server);
4. Choosing the encoding technique to encode the payload so that the Intrusion-prevention system will not catch the encoded payload;
5. Executing the exploit.

metasploit
hacking
exploitation

Added: 1716 days ago by
admin
Views: 1344
Comments: 0

( Not yet rated )

Reinforcement Learning: Value Functions and Markov Property

33:57

SANJEEV SHARMA: 5th Jan 2010: REINFORCEMENT LEARNING: Lecture-2: Value Functions and Markov Property. In this lecture I discussed episodic & continual tasks; states, rewards, returns, the discounted return and the agent-environment interaction process. I gave details about the discounting parameter and proved that the expected return is finite through discounting. I then covered the kinds of value functions, i.e. the state-value function and the action-value function of a policy, derived the expression for the state-value function of a policy, and interpreted each term in the BELLMAN Equation. I also gave a very brief introduction to the MARKOV PROPERTY, MARKOV STATES and MDPs. More details about the BELLMAN Equation and MDPs will be discussed in Lecture 3; topics like the Bellman Optimality Equation and the relation between the state-value and action-value functions are skipped here, as they form the topic of discussion in Lecture 3.
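
The finiteness-through-discounting argument can be illustrated in a few lines of Python (an illustrative sketch, not lecture code): with rewards bounded by R_max, the discounted return is dominated by the geometric series R_max/(1-γ).

```python
# Illustrative sketch: the discounted return G = sum_t gamma^t * r_t is
# finite whenever rewards are bounded, since it is dominated by the
# geometric-series bound R_max / (1 - gamma).
def discounted_return(rewards, gamma):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

gamma, r_max = 0.9, 1.0
g = discounted_return([r_max] * 1000, gamma)   # long episode of max reward
bound = r_max / (1 - gamma)                    # geometric-series bound
print(round(g, 6), round(bound, 6))  # 10.0 10.0
```

Even after 1000 steps of maximum reward, the return has essentially saturated at the bound 1/(1-0.9) = 10.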

Reinforcement
Learning
Value
Functions
Bellman
Equation
Markov
Property

Added: 1692 days ago by
admin
Views: 2201
Comments: 0

( Not yet rated )

Reinforcement Learning: Iterative Algorithms & Single Agent Path Planning in Static Environment under FOMDPs

43:37

SANJEEV SHARMA : 18th Jan 2010: REINFORCEMENT LEARNING: Lecture - 3: ITERATIVE ALGORITHMS & SINGLE AGENT PATH PLANNING IN FOMDPs. (Fully observable MDPs).

CONTENTS:

Optimal Value Functions, Bellman Optimality Equation, Relation b/w Optimal Action value function and Optimal State-Value Function, Policy Evaluation, Policy Iteration, Value Iteration, Policy Improvement, Agent Path Planning in Static Environment in FOMDPs.

DESCRIPTION:

In this lecture I first recapped a few things from the previous lecture. I then introduced optimal policies and the relationship b/w the Optimal State-Value Function and the Optimal Action-Value Function, and stated the Bellman Optimality Equation for both. I then gave a brief overview of my example of an agent's path planning in a static environment under fully observable MDPs, and finally detailed the 4 most important algorithms: Policy Evaluation, Policy Improvement, Policy Iteration and Value Iteration.

Date - 23rd January 2011: The code I wrote approximately a year ago for this lecture: Path_Planning_Policy_Evaluation_Sanjeev.zip. Just run Sanjeev_Main_Path.m .
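
The attached code is MATLAB; as an illustrative Python sketch in the same spirit (the toy grid, goal position and step cost below are assumptions, not the lecture's), value iteration on a static gridworld looks like this:

```python
import numpy as np

# Minimal value-iteration sketch on a 4x4 static grid: goal at (0, 0),
# step cost -1, deterministic up/down/left/right moves (walls clamp).
# The optimal value of a cell is minus its Manhattan distance to the goal.
N, gamma = 4, 1.0
V = np.zeros((N, N))
for _ in range(100):                       # sweep until convergence
    V_new = V.copy()
    for i in range(N):
        for j in range(N):
            if (i, j) == (0, 0):
                continue                   # terminal state keeps value 0
            moves = [(max(i - 1, 0), j), (min(i + 1, N - 1), j),
                     (i, max(j - 1, 0)), (i, min(j + 1, N - 1))]
            V_new[i, j] = max(-1 + gamma * V[a, b] for a, b in moves)
    V = V_new
print(V[3, 3])  # -6.0: minus the Manhattan distance to the goal
```

Greedily following the resulting values recovers a shortest path to the goal.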

Reinforcement
Learning
Policy
Evaluation
Policy
Improvement
Policy
Iteration
Value
Iteration
Bellman
Optimality
Equation
Single
Agent
Path
Planning
FOMDPs
Static
Environment

Added: 1682 days ago by
admin
Views: 1753
Comments: 0

( Not yet rated )

Machine Learning: Kullback-Leibler Divergence & Convex Analysis

37:49

SANJEEV SHARMA: 11th Jan 2010: MACHINE LEARNING: Lecture-10: Kullback-Leibler Divergence & Convex Analysis.

Contents: Convex Analysis, Perspective Functions, EPIGRAPH, Information Gain, Entropy, Discrete Entropy, Differential Entropy, Noiseless Coding Theory, Relation b/w entropy and information, Kullback-Leibler Divergence.

Description: In this lecture I first discussed the convex analysis of the Kullback-Leibler Divergence b/w two positive vectors. The KL Divergence b/w two positive vectors is the sum of the Relative Entropy and a linear function of the vectors, whereas when the vectors represent probability distributions the KL Divergence is just the relative entropy between the two distributions. So the first part of the lecture covers the convex interpretation, EPIGRAPHS, Perspective Functions & the convex analysis of the KL Divergence, with each term discussed briefly. In the 2nd part of the lecture I discussed Information Gain theory and Machine Learning: first the information associated with observing the state of a particular discrete random variable, then the entropy associated with the distribution. I showed that the maximum entropy in the discrete case is attained by the uniform distribution, gave the relation b/w entropy and information, and then discussed DIFFERENTIAL ENTROPY for continuous random variables. Finally I provided the mathematical expression of the Kullback-Leibler Divergence.
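
The KL divergence of the lecture can be computed directly (an illustrative sketch; natural log, so entropies are in nats):

```python
import numpy as np

# KL divergence between discrete distributions:
# D(p||q) = sum_i p_i * log(p_i / q_i) >= 0, with equality iff p = q.
def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.5, 0.3, 0.2])
u = np.full(3, 1 / 3)            # uniform distribution
print(kl_divergence(p, p))       # 0.0
# Against the uniform distribution, D(p||u) = log(n) - H(p),
# which connects the divergence to the entropy discussed above.
print(kl_divergence(p, u) >= 0)  # True
```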

Machine
Learning
Kullback
Leibler
Divergence
Perspective
Function
Epigraph
Information
Gain
Entropy
Differential
Entropy
Discrete
Entropy
Relative
Entropy

Added: 1689 days ago by
admin
Views: 2896
Comments: 0

( Not yet rated )

Reinforcement Learning: Least-Squares Temporal Difference Learning

17:21

SANJEEV SHARMA : 24th Oct 2010: REINFORCEMENT LEARNING: Phase-II, Presentation-1 (P2P1): Least-Squares Temporal Difference Learning.

CONTENTS:
Value Function, Value Function Approximation, Linear Function Approximation, TD Learning, LSTD algorithm.

DESCRIPTION:
The LSTD algorithm is a modification of the TD Learning methods. Though the two are the same in spirit, for they both solve for the fixed point, they differ in how they approach the solution. TD is an incremental, online algorithm whereas LSTD is an offline, batch-learning algorithm: TD drives the expected TD-error to 0, whereas LSTD directly computes the weight vector for which the expected TD update is 0. In this lecture I have discussed the LSTD algorithm.
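
The batch computation can be sketched on a toy problem (an illustrative sketch, not lecture code; the 2-state chain and one-hot features below are assumptions):

```python
import numpy as np

# LSTD sketch: accumulate A = sum phi(s)(phi(s) - gamma*phi(s'))^T and
# b = sum phi(s)*r over a batch of transitions, then solve A w = b so
# that the expected TD update is zero.  Toy chain: state 0 -> state 1
# with reward 1; state 1 is absorbing with reward 0; gamma = 0.5, so the
# true values are V(0) = 1, V(1) = 0.
gamma = 0.5
phi = np.eye(2)                       # one-hot (tabular) features
transitions = [(0, 1.0, 1), (1, 0.0, 1)]   # (s, r, s')

A = np.zeros((2, 2))
b = np.zeros(2)
for s, r, s_next in transitions:
    A += np.outer(phi[s], phi[s] - gamma * phi[s_next])
    b += phi[s] * r
w = np.linalg.solve(A, b)
print(w)  # [1. 0.]: the weights equal the true values V(0), V(1)
```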

"
Least-Squares
Temporal
Difference
Learning",
LSTD,
"Reinforcement
Learning",
"Value
Function
Approximation",
"Linear
Function
Approximation"

Added: 1403 days ago by
admin
Views: 3385
Comments: 0

( Not yet rated )

Reinforcement Learning: Monte Carlo & Intro to Ellipsoidal Constrained Agent Navigation (Path Planning for UGV)

84:32

SANJEEV SHARMA : 4th March 2010: REINFORCEMENT LEARNING: Lecture - 4: MONTE CARLO & Introduction to Convex-Reinforcement-Path (Ellipsoidal Constrained Agent Navigation).

CONTENTS:
TALOS, Introduction to ELLIPSOIDAL METHODS for Path Planning, First & Every Visit Monte Carlo, Value function estimation, Problem of Infinite Episodes & Exploring Starts, Generalized Policy Iteration, Eliminating the assumption of infinite episodes, Monte Carlo ES, e-soft & e-greedy policies, Eliminating Exploring Starts, On-Policy & Off-Policy Monte Carlo, Estimating one policy while following another.

DESCRIPTION: In this video I first provided a brief introduction to ECAN, a path-planning algorithm, and explained the need for Monte Carlo (estimation) methods. I covered First-Visit & Every-Visit Monte Carlo and value-function estimation using Monte Carlo, then the problem of infinite episodes in the evaluation step of Generalized Policy Iteration (GPI) and the idea behind the Value Iteration algorithm for eliminating it. I then discussed the Monte Carlo ES algorithm, which removes the need for infinite episodes by using exploring starts, and On-Policy Monte Carlo, which eliminates both assumptions. Finally I discussed an algorithm for Off-Policy Monte Carlo, where one policy can be estimated while generating episodes from another, with requirements independent of the environment's dynamics.
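
First-visit Monte Carlo value estimation can be sketched on a toy episode structure (illustrative only; the two-state chain and rewards are assumptions):

```python
import random

# First-visit Monte Carlo sketch (gamma = 1): V(s) is the average return
# observed from the first visit to s, across many sampled episodes.
# Toy chain: 'a' yields a random reward of 0 or 2 (mean 1), then 'b'
# yields reward 1 and the episode terminates.
random.seed(0)
returns = {'a': [], 'b': []}
for _ in range(20000):
    episode = [('a', random.choice([0.0, 2.0])), ('b', 1.0)]
    g = 0.0
    for s, r in reversed(episode):   # accumulate the return backwards
        g = r + g
        returns[s].append(g)         # each state occurs once -> first visit
V = {s: sum(v) / len(v) for s, v in returns.items()}
print(V['b'])  # 1.0 exactly; V['a'] is close to 2.0 = E[r_a] + V['b']
```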

Reinforcement
Learning
Monte
Carlo
Ellipsoidal
Constrained
Agent
Navigation
Path
Planning
On-Policy
Monte
Carlo
Off-Policy
Monte
Carlo
e-soft
e-greedy
Exploring
Starts
GPI

Added: 1637 days ago by
admin
Views: 2087
Comments: 0

( Not yet rated )

General Mathematical Optimization

23:31

Sanjeev Sharma: 30th Nov 2010. CCO-10/11: P-001, Section-1: General Mathematical Optimization

Contents: Linear Programs, Least-Squares, Convex Programs, Nonlinear Programs and Relaxations.

A mathematical optimization problem involves minimizing (or maximizing) a function of an optimization variable, subject to constraints on that variable.
There are several classes of optimization problems, for example Linear Programming, Least-Squares,
Convex Programming and general Nonlinear Programming problems. Linear programs and least-squares problems can be solved reliably and efficiently.
Convex Programming includes Linear Programs and Least-Squares problems as special cases, and convex programs can be solved efficiently by methods
such as interior-point, subgradient, cutting-plane and ellipsoid methods. Solving a general Nonlinear Programming problem (not linear and also not known
to be convex) is a challenging task, as no specific approach is known; a common way to handle these problems is to find a locally optimal
solution. Convex optimization also plays a role in nonlinear programming through lower bounds: several relaxation methods replace
the nonlinear constraints with looser convex constraints to obtain a lower bound on the NLP problem.

Optimization

Added: 1367 days ago by
admin
Views: 2561
Comments: 0

( Not yet rated )

Support Vector Machine & In-depth Convex Analysis

78:15

Sanjeev Sharma
23rd Nov 2010: Machine Learning: Lecture-12: MLR: Contents: Primal Hard Margin & Soft-Margin, Dual Hard Margin & Soft Margin, KKT-Conditions,
Lower Bounds, Lagrange Dual, Slater's Constraint Qualification, Weak & Strong Duality,
Complementary Slackness.

Description: SVM can be used both as a hard- and a soft-margin classifier. The hard margin is used when the dataset is separable; the soft margin is used for classifying overlapping classes. The power of the SVM comes from the DUAL formulation, which enables the Kernel Trick and thus facilitates basis construction. The sparsity of the SVM comes from Complementary Slackness, one of the KKT conditions. Slater's Constraint Qualification is satisfied for the SVM since all the
constraints are affine and the objective function is convex; hence strong duality holds.
Thus solving the primal is equivalent to solving the dual problem, since the optimal duality gap is zero.
The Lagrangian dual explains the relationship between the primal & dual problems. This lecture presents
the complete convex analysis of the SVM, deriving the KKT conditions and explaining the Lagrange
dual and strong duality. It also explains the interpretation of the Lagrange multipliers and their importance.
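
As a runnable complement to the dual analysis, here is a primal-side sketch instead: soft-margin SVM trained by subgradient descent on the hinge loss (Pegasos-style; the data, step schedule and regularization below are assumptions, not the lecture's dual solver).

```python
import numpy as np

# Soft-margin SVM via subgradient descent on the primal objective
#   (lam/2)||w||^2 + (1/n) sum_i max(0, 1 - y_i (w.x_i + b)).
# At the optimum, complementary slackness makes only margin/violating
# points contribute, which is the sparsity discussed above.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
w, b, lam, n = np.zeros(2), 0.0, 0.1, len(X)
for t in range(1, 2001):
    eta = 1.0 / (lam * t)                    # decaying step size
    viol = y * (X @ w + b) < 1               # margin violators
    gw = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
    gb = -y[viol].sum() / n
    w, b = w - eta * gw, b - eta * gb
print(np.all(np.sign(X @ w + b) == y))  # True: the two clusters separate
```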

SVM,
Lagrange
Dual,
KKT-Conditions,
Weak
&
Strong
Duality,
Slater's
Constraint
Qualification,
Complementary
Slackness.

Added: 1373 days ago by
admin
Views: 3950
Comments: 1

( Not yet rated )

Reinforcement Learning: Kernelized Value Function Approximation

29:34

SANJEEV SHARMA 8th Nov 2010: Reinforcement Learning: Phase-II, Presentation-4: ARL-10/11 - (Lecture-4). Kernelized Value Function Approximation.

CONTENTS: Kernel Methods, Kernel Based Regularized Least-Squares Regression, Kernelized Value Function Approximation.

DESCRIPTION: Kernel methods have been widely utilized in machine learning. The kernel trick facilitates classification by circumventing the need to construct higher-order basis functions: a kernel maps the feature space to a higher-dimensional space, enabling separation of the dataset there. Reinforcement learning methods such as GPRL and KLSTD have utilized the kernel trick. The Kernelized Value Function Approximation algorithm unifies these methods and provides a model-based solution for approximating the state-value function; it can be used for prediction problems. It utilizes kernel-based regularized least-squares regression to find the relation between states and the corresponding expected γ-discounted total reward, and also utilizes this model to find the kernel for the next state. The value function finally results in the sum of a geometric progression (GP) involving the KERNEL matrices, giving an analytical solution to the approximation problem.
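
The building block named above, kernel-based regularized least-squares regression, can be sketched as follows (illustrative only; the RBF kernel, its width, and the 1-D target are assumptions):

```python
import numpy as np

# Kernel regularized least-squares: solve (K + lam*I) alpha = y and
# predict f(x) = sum_i alpha_i k(x_i, x).  With a small regularizer the
# fit nearly interpolates the training targets.
def rbf(A, B, gamma=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = np.linspace(0, 1, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
alpha = np.linalg.solve(rbf(X, X) + 1e-6 * np.eye(len(X)), y)
y_hat = rbf(X, X) @ alpha
err = np.max(np.abs(y_hat - y))
print(err < 0.05)  # True: near-interpolation on the training points
```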

Kernel
Methods,
Kernel
Reinforcement
Learning,
Kernelized
Value
Function
Approximation

Added: 1388 days ago by
admin
Views: 1796
Comments: 0

( Not yet rated )

Machine Learning: Logistic Regression.

25:38

Sanjeev Sharma > This is the 3rd lecture in ML. In this lecture I present Logistic Regression, which belongs to the exponential family of functions and is often used in classification; I explain the need for the logistic function. In the later part of the lecture I cover Gradient Ascent and Newton's Method for finding the maximum log-likelihood. In general, Newton's Method works quite well for logistic regression, so the case where the Hessian is ill-conditioned is not covered; that is a topic of Numerical Optimization and will be covered in the Optimization channel when I discuss the INEXACT Newton's Method, where an approximate Hessian rather than the exact Hessian is used for updating the parameters.
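
Newton's method for the logistic log-likelihood can be sketched in a few lines (illustrative only; the synthetic data and "true" weights below are assumptions):

```python
import numpy as np

# Newton's method for logistic regression: iterate w <- w - H^{-1} g
# with gradient g = X^T (y - p) and Hessian H = -X^T diag(p(1-p)) X
# of the log-likelihood, where p = sigmoid(X w).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.uniform(-3, 3, 200)])
w_true = np.array([0.5, 1.0])                       # assumed true weights
y = (rng.uniform(size=200) < sigmoid(X @ w_true)).astype(float)

w = np.zeros(2)
for _ in range(25):
    p = sigmoid(X @ w)
    g = X.T @ (y - p)                               # gradient of the LL
    H = -(X * (p * (1 - p))[:, None]).T @ X         # Hessian of the LL
    w = w - np.linalg.solve(H, g)
print(w)  # roughly recovers w_true from this sample
```

Since the log-likelihood is concave, Newton's method converges in a handful of iterations here.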

Machine
Learning
Logistic
Regression
Sigmoid
Gradient
Descent
Gradient
Ascent
Newton
Inexact
Newton
Method

Added: 1791 days ago by
admin
Views: 11657
Comments: 0

( Not yet rated )

Unconstrained Minimization: Backtracking Line Search & Gradient Descent

30:15

SANJEEV SHARMA: 3rd Dec 2010. CCO-10/11: P-002, Section-2: Unconstrained Minimization: Backtracking Line Search & Gradient Descent.

Contents: Exact Line Search; Inexact Line Search; Backtracking Line Search; Gradient Descent.

Methods for solving unconstrained minimization problems include descent algorithms. These are iterative solvers that alternate between two steps to
find the solution. Step 1 involves finding a search direction (δx) and step 2 involves finding a step-length *t* to move in the direction
δx. The step-length is found using line-search methods, of which there are two kinds: exact & inexact line search. Exact line
search finds the step-length for which the function f(x+*t*δx) is minimized, i.e. *t*=argmin_{(s>0)}f(x+sδx). Inexact line search
just finds a step-length for which f(x+*t*δx) is approximately minimized; the most popular is the backtracking line-search algorithm,
which depends on two constants, α & β. Gradient descent is an algorithm which uses δx=-∇f(x), i.e. the direction of
maximum decrease of f(x), and uses line search (exact or backtracking) to find the step-length to move in this direction. The performance of the algorithm
depends on the sublevel sets of f(x) near the optimum. This presentation shows how to practically apply the backtracking line-search algorithm.
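
The backtracking rule with its two constants α and β can be sketched directly (illustrative only; the quadratic test function and the values of α, β are assumptions):

```python
import numpy as np

# Backtracking line search inside gradient descent: start at t = 1 and
# shrink t <- beta*t until the sufficient-decrease condition
#   f(x + t*dx) <= f(x) + alpha * t * grad_f(x).dx
# holds, then take the step.
def backtracking(f, grad_f, x, dx, alpha=0.3, beta=0.8):
    t = 1.0
    while f(x + t * dx) > f(x) + alpha * t * grad_f(x) @ dx:
        t *= beta
    return t

H = np.diag([2.0, 10.0])            # ill-scaled quadratic test problem
f = lambda x: 0.5 * x @ H @ x
grad_f = lambda x: H @ x

x = np.array([2.0, 2.0])
for _ in range(100):
    dx = -grad_f(x)                  # gradient-descent direction
    x = x + backtracking(f, grad_f, x, dx) * dx
print(f(x) < 1e-8)  # True: converged to the minimum at the origin
```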

Backtracking
Line
Search,
Gradient
Descent,
Exact
Line
Search

Added: 1364 days ago by
admin
Views: 5372
Comments: 0

( Not yet rated )

Unconstrained Minimization: Convergence Analysis of Gradient Descent Using Line Search

21:54

SANJEEV SHARMA: 12th Dec 2010. CCO-10/11: P-003, Section-2: Unconstrained Minimization: Convergence Analysis & Condition Number Dependence - Gradient Descent.

Contents: Backtracking Line Search, Exact Line Search, Condition Number, Gradient Descent.

Gradient Descent is an iterative algorithm for solving unconstrained minimization problems: in each iteration it finds a descent direction
and a step-length to move along it. Step 1 involves finding the search direction δx, where δx=-∇f(x) for gradient descent,
and step 2 involves finding a step-length t to move in the direction δx; exact or backtracking line search may be used to find the step-length.
The rate of convergence of gradient descent depends on the eccentricity of the sublevel sets, i.e. on the condition number of the Hessian of the function at the optimum,
ω(∇²f(x)). This presentation discusses the convergence analysis and the dependence of the rate of convergence on the condition number, for gradient descent with line-search methods.

Convergence
Analysis,
Condition
Number,
Gradient
Descent,
Unconstrained
Minimization

Added: 1355 days ago by
admin
Views: 3931
Comments: 0

( Not yet rated )

Hand/Finger Detection Using Image Processing

12:49

Sanjeev Sharma>>This was my own project. In this video, I demonstrate a very simple and easy to apply algorithm for hand/finger detection using basic tools/methods of image processing. I will also demonstrate the results obtained after each step of the algorithm.

Image
Processing
Hand
detection
morphology

Added: 1806 days ago by
admin
Views: 3591
Comments: 0

(1 rating)

Machine Learning: Perceptrons - Kernel Perceptron Learning

33:43

SANJEEV SHARMA
12th Nov 2010: Machine Learning: Lecture-11: Kernel Perceptron Learning.

CONTENTS: Simple Perceptron Algorithm, Voted Perceptron Algorithm, Kernel Perceptron Algorithm.

DESCRIPTION: Solving a machine learning problem like classification or regression requires constructing basis functions, and in general it's quite hard to determine what kind of basis functions will perform well on the task at hand. Sometimes a polynomial may perform well, but what should its degree be? Using kernels circumvents this problem: the cardinal advantage of kernels is that they obviate the need to construct the basis functions explicitly. In this lecture I address this issue, explaining the simple Perceptron learning algorithm with linear basis functions and then the voted version of the Perceptron algorithm, again with linear basis functions. The voted version assigns a weight to each weight vector it encounters during the learning phase and outputs a final weight vector that is the voted sum of these weight vectors. A Perceptron can also solve nonlinear problems by constructing nonlinear basis functions, but using the KERNEL PERCEPTRON algorithm obviates the need to construct them.
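
The kernel perceptron can be sketched compactly (illustrative only; the XOR dataset and RBF kernel are assumptions): instead of a weight vector, it keeps per-example mistake counts and predicts through kernel evaluations.

```python
import numpy as np

# Kernel perceptron sketch: keep mistake counts c_i and predict
# sign(sum_i c_i * y_i * k(x_i, x)).  An RBF kernel lets it learn a
# nonlinear (XOR-like) boundary with no explicit basis functions.
def k(a, b):
    return np.exp(-np.sum((a - b) ** 2))

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])           # XOR labels, not linearly separable
c = np.zeros(len(X))

def predict(x):
    s = sum(c[i] * y[i] * k(X[i], x) for i in range(len(X)))
    return 1.0 if s > 0 else -1.0

for _ in range(20):                         # epochs over the data
    for i in range(len(X)):
        if predict(X[i]) != y[i]:
            c[i] += 1                       # record the mistake
print([predict(x) for x in X])  # matches y: XOR is learned
```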

Kernel
Machines
Kernel
Trick
Perceptron
Kernel
Perceptron
Voted
Perceptron

Added: 1384 days ago by
admin
Views: 3316
Comments: 0

( Not yet rated )

Path Planning: Ellipsoidal Constrained Agent Navigation - Autonomous Navigation of UAVs & UGVs in Unknown Environments

50:10

Sanjeev Sharma : 11th March 2010: PATH PLANNING: PRESENTATION - 2, CRP - PRESENTATION - 1: Ellipsoidal Constrained Agent Navigation, Class: Convex-Reinforcement-Path,
Area: Path Planning for Autonomous Navigation of UGV in Urban Environment.

Project Home Page

DESCRIPTION:

This presentation was delivered in March 2010, with the title "Path Planning: Ellipsoidal Constrained Agent Navigation - Autonomous Navigation of UGV in Urban Environments", when the ECAN algorithm was in its development phase. The algorithm was later modified for UAV and UGV navigation in unseen environments. The presentation shows only point-sized (ground) robot navigation; for better understanding, download the publication. The algorithm uses convex Quadratically Constrained Quadratic Programming (QCQP), Semi-Definite Programming (SDP) and Second-Order Cone Programming (SOCP); SOCP handles finite-sized and non-convex-shaped robots.

Sanjeev Sharma, *QCQP-Tunneling: Ellipsoidal Constrained Agent Navigation*. In Proceedings of Second IASTED International Conference on Robotics, Nov 7-9, 2011, Pittsburgh, USA.

QCQP,
SDP,
SOCP,
Autonomous
Navigation,
Path
Planning,
Convex
Programming,
Continuous
Environments

Added: 1630 days ago by
admin
Views: 3205
Comments: 0

( Not yet rated )

Machine Learning: Gaussian Discriminant Analysis

9:15

SANJEEV SHARMA >> Lecture 6 - Machine Learning. 17th October. Gaussian Discriminant Analysis belongs to a broader family of algorithms known as Generative Learning Algorithms. In this video I show the Bernoulli distribution of the target variables and the multivariate Gaussian distribution of the feature vectors. I then give an overview of the joint log-likelihood and demonstrate the result of the argmax over the parameters of the JLL. The trade-off b/w logistic regression and GDA will be discussed separately in Lecture 9.
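
The maximum-likelihood fit has a closed form: class frequencies for the Bernoulli prior, class means, and a shared covariance. A minimal sketch on synthetic clusters (the data below are assumptions, not from the lecture):

```python
import numpy as np

# Gaussian Discriminant Analysis sketch: fit a Bernoulli prior and
# per-class Gaussian means with a shared covariance, then classify by
# the larger joint log-likelihood log p(x|y) + log p(y).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

phi = y.mean()                                    # Bernoulli parameter
mu = [X[y == c].mean(axis=0) for c in (0, 1)]     # class means
Xc = np.vstack([X[y == c] - mu[c] for c in (0, 1)])
Sigma = Xc.T @ Xc / len(X)                        # shared covariance
Sinv = np.linalg.inv(Sigma)

def log_joint(x, c):
    d = x - mu[c]
    prior = np.log(phi if c == 1 else 1 - phi)
    return prior - 0.5 * d @ Sinv @ d             # + const, same for both

pred = np.array([int(log_joint(x, 1) > log_joint(x, 0)) for x in X])
print((pred == y).mean())  # high training accuracy on separated clusters
```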

Generative
Learning
Algorithms
Gaussian
Discriminant
Analysis
Multivariate
Gaussian
Bernoulli
Joint
Log
Likelihood

Added: 1775 days ago by
admin
Views: 2859
Comments: 0

( Not yet rated )

Unconstrained Minimization: Steepest Descent Methods & Convergence Analysis

44:46

SANJEEV SHARMA: 7th Jan 2011. CCO-10/11: P-004, Section-2: Unconstrained Minimization: Steepest Descent Methods and Convergence Analysis.

Contents: Steepest Descent, Coordinate Descent, Newton's Method, Convergence Analysis.

Steepest Descent is one of the algorithms for solving unconstrained minimization problems. It is an iterative algorithm,
where in each iteration it finds a steepest-descent direction, with the length of the descent vector constrained
by some valid norm (||.||). Different norms for constraining the length of the descent direction result in different descent
algorithms: the *L₂*-Norm results in the Gradient Descent algorithm, the Quadratic Norm (||z||_P) results in a preconditioned (change-of-coordinates) gradient descent, and the *L₁*-Norm results in the Coordinate Descent algorithm.

Steepest
Descent
Gradient
Descent
Coordinate
Descent
Convergence
Analysis

Added: 1329 days ago by
admin
Views: 2622
Comments: 1

( Not yet rated )

Machine Learning: Locally Weighted Regression

7:37

Sanjeev Sharma > This is the 2nd lecture in ML. Locally weighted regression is classified as a non-parametric algorithm. In machine learning & optimization, locally weighted regression (LWR) is used to locally fit a curve to the dataset; a weighting function tells the algorithm which instances to focus on.
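
The local fit can be sketched as weighted least squares around each query point (illustrative only; the Gaussian weighting and bandwidth τ are assumptions):

```python
import numpy as np

# Locally weighted regression sketch: for a query x0, fit a weighted
# least-squares line with Gaussian weights w_i = exp(-(x_i-x0)^2/(2*tau^2)),
# so nearby instances dominate the local fit.
def lwr_predict(X, y, x0, tau=0.05):
    w = np.exp(-((X - x0) ** 2) / (2 * tau ** 2))
    A = np.column_stack([np.ones_like(X), X])    # local model: a + b*x
    Aw = A * w[:, None]
    theta = np.linalg.solve(Aw.T @ A, Aw.T @ y)  # weighted normal equations
    return theta[0] + theta[1] * x0

X = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * X)
pred = lwr_predict(X, y, 0.25)
print(abs(pred - 1.0) < 0.1)  # True: near the peak value sin(pi/2) = 1
```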

Machine
Learning
Optimization
Locally
weighted
regression
least
squares
fitting

Added: 1792 days ago by
admin
Views: 2558
Comments: 0

( Not yet rated )

Unconstrained Minimization: Theoretical Analysis - Stopping Criterion & Condition Number

25:7

SANJEEV SHARMA: 2nd Dec 2010. CCO-10/11: P-001, Section-2: Unconstrained Minimization: Theoretical Analysis of Stopping Criterion & Condition Number.

Contents: Condition Number, Stopping Criterion, Strong Convexity.

Unconstrained minimization, as the name suggests, is minimizing a function f(x) without any constraints; the only constraints are the implicit ones from the
domain of the function to be minimized. These problems are solved through methods such as descent methods, steepest descent, Newton's algorithm, interior-point methods etc. The line-search algorithms are iterative solvers, so we need a criterion for stopping the algorithm and a way to predict its performance. These iterative solvers depend on the condition number of the Hessian of the objective (ω(∇²f(x))) near the optimal point, and
the rate of convergence depends on this condition number, i.e. on the eccentricity of the sublevel sets. Moreover, these algorithms use the norm of the gradient of the objective function
as a stopping criterion. This presentation provides a complete derivation of the stopping criterion and discusses the condition number of the
α-sublevel sets and its relation to ω(∇²f(x)).

Unconstrained
Minimization
Condition
Number
Convergence
Line
Search

Added: 1365 days ago by
admin
Views: 2545
Comments: 0

( Not yet rated )

Machine Learning: Linear Regression.

19:39

Sanjeev Sharma. This is the 1st lecture in the Machine Learning channel. The lecture is about linear regression, explaining how to fit a line to the data using the least-squares error function. (Handling outliers is not discussed here; it will be covered in upcoming lectures, and especially in Optimization.)
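
The line fit of the lecture reduces to the normal equations; a minimal sketch (the toy data are an assumption):

```python
import numpy as np

# Fitting a line y = a + b*x by least squares: minimize
# sum_i (y_i - a - b*x_i)^2 by solving the normal equations
# (A^T A) coef = A^T y for the design matrix A = [1, x].
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])          # exactly y = 1 + 2x
A = np.column_stack([np.ones_like(x), x])
coef = np.linalg.solve(A.T @ A, A.T @ y)
print(coef)  # [1. 2.]: intercept 1, slope 2
```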

Machine
Learning
Linear
Regression
Least
Squares

Added: 1798 days ago by
admin
Views: 2415
Comments: 1

(2 ratings)

Reinforcement Learning: Fixed-Point Estimate of State-Action Value Function & Least-Squares Policy Iteration

42:33

SANJEEV SHARMA : 2nd Nov 2010: Reinforcement Learning: Phase-II, Presentation-2: ARL-10/11 - (Lecture-2): Fixed-Point Estimation of State-Action Value Function & Least-Squares Policy Iteration.

CONTENTS: Fixed-Point Estimation of State-Action Value Function, Least-Squares Policy Iteration (LSPI).

DESCRIPTION: LSPI is an off-policy algorithm and a modification of LSTD. LSPI has an advantage over the LSTD algorithm in that it doesn't require new samples to be collected to compute the value function for each new policy: being off-policy, it can accept samples from any random policy. This provides a data efficiency that the LSTD algorithm lacked. LSPI uses an algorithm, LSQ (LSTDQ), to compute the value function for a policy, which is the off-policy innovation in the LSPI algorithm. LSPI returns the fixed-point solution, which is discussed in detail in the lecture.

LSPI,
Fixed-Point
Solution,
Bellman
Operator,
Acrobot,
Chain-Walk-Domain

Added: 1395 days ago by
admin
Views: 2311
Comments: 0

( Not yet rated )

MACHINE LEARNING:Agglomerative Hierarchical Clustering-BIC

13:26

SANJEEV SHARMA: 25th December 2009. Lecture-9: Agglomerative Hierarchical Clustering using the Bayesian Information Criterion. The Bayesian Information Criterion, or BIC for short, is a hypothesis-testing criterion, like the Kullback-Leibler divergence; it is a parametric model and assumes a Gaussian distribution over the dataset. Agglomerative hierarchical clustering is bottom-up clustering. In this lecture I show how to use the BIC score of a cluster set as the decision criterion for merging clusters: at each level we evaluate the BIC score, test a candidate merge, and recompute the BIC score at the next higher level. If the difference b/w the higher and lower levels is greater than zero, the two clusters can be merged. We follow this procedure to merge clusters, and can also use it to find the most promising clusters to merge.
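
The merge test described above can be sketched for 1-D Gaussian clusters as follows; `gaussian_bic` and `should_merge` are illustrative names, and the 2-parameter (mean, variance) Gaussian model is an assumption of this sketch:

```python
import math

def gaussian_bic(points):
    """BIC of a single 1-D Gaussian fit: max log-likelihood minus
    (num_params / 2) * log(n), with num_params = 2 (mean, variance)."""
    n = len(points)
    mu = sum(points) / n
    var = sum((x - mu) ** 2 for x in points) / n
    var = max(var, 1e-12)                      # guard against zero variance
    loglik = -0.5 * n * (math.log(2 * math.pi * var) + 1)
    return loglik - (2 / 2) * math.log(n)

def should_merge(c1, c2):
    """Merge if the merged model's BIC beats the sum for the separate pair."""
    return gaussian_bic(c1 + c2) > gaussian_bic(c1) + gaussian_bic(c2)

# Overlapping clusters favour merging; well-separated ones do not.
close = should_merge([0.0, 0.1, 0.2, 0.3], [0.05, 0.15, 0.25, 0.35])
far = should_merge([0.0, 0.1, 0.2, 0.3], [10.0, 10.1, 10.2, 10.3])
```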

Hierarchical
Clustering
Bayesian
Information
Criterion
BIC

Added: 1706 days ago by
admin
Views: 2283
Comments: 0

( Not yet rated )

Reinforcement Learning: Value Functions and Markov Property

33:57

SANJEEV SHARMA: 5th Jan 2010: REINFORCEMENT LEARNING: Lecture-2: Value Functions and Markov Property. In this lecture I discussed episodic and continual tasks, along with states, rewards, returns, the discounted return, and the agent-environment interaction process. I provided details about the discounting parameter and proved that discounting keeps the expected return finite. I then discussed the kinds of value functions, i.e. the state-value function and action-value function of a policy, derived the expression for the state-value function of a policy, and interpreted each term in the BELLMAN equation. I also gave a very brief introduction to the MARKOV PROPERTY, MARKOV STATES and MDPs. More details about the BELLMAN equation and MDPs, including the Bellman optimality equation and the relation between the state-value and action-value functions, are deferred to Lecture 3.
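
The finiteness-through-discounting argument can be checked numerically: with |r_t| <= r_max, the return sum of gamma^t * r_t is bounded by the geometric series r_max / (1 - gamma). The long constant-reward episode below is purely illustrative:

```python
def discounted_return(rewards, gamma=0.9):
    """Sum of gamma^t * r_t over an episode's reward sequence."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# With r_t = 1 at every step, the return stays below the geometric-series
# bound 1 / (1 - gamma) = 10, no matter how long the episode runs.
g = discounted_return([1.0] * 10_000, gamma=0.9)
bound = 1.0 / (1 - 0.9)
```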

Reinforcement
Learning
Value
Functions
Bellman
Equation
Markov
Property

Added: 1692 days ago by
admin
Views: 2201
Comments: 0

( Not yet rated )

Reinforcement Learning: Geometric Analysis of BRM & Fixed-Point methods

22:51

SANJEEV SHARMA 5th Nov 2010: Reinforcement Learning: Phase-II, Presentation-3: ARL-10/11 - (Lecture-3): Geometric Analysis of Bellman Residual Minimization & Fixed-Point Methods.

CONTENTS: Bellman Residual Minimization, MDP, Fixed-Point Methods.

DESCRIPTION: The target of a control problem is to find an optimal control policy for a given task (domain/MDP).
The algorithm that finds this optimal policy is policy iteration (PI), which is analogous to the EM algorithm.
Its first step, the value-determination step, computes the state-action value function
for a given policy. The next step, the policy-improvement step, makes the next policy greedy
with respect to the value function of the previous policy. The value-determination step, if using least-squares algorithms, can be solved either by Bellman Residual Minimization or by fixed-point methods. This lecture discusses the BRM and FP methods and provides a geometric interpretation of both.
BRM and FP minimize, respectively, the hypotenuse and the base of a right triangle.

Geometric-Analysis
BRM
Bellman
Residual
Minimization
Fixed-Point
Methods

Added: 1392 days ago by
admin
Views: 2195
Comments: 0

( Not yet rated )

Hacking Into a system Using Fast-Track

5:41

By Prateek Gianchandani on Dec 15, 2009 >> Fast-Track is a Python-based open-source project aimed at helping penetration testers identify, exploit, and further penetrate a network. It was released by David Kennedy at Shmoocon 2009. In this video I demonstrate how one can easily compromise an unpatched system using Fast-Track: it scans the system for open ports using Nmap and then uses Metasploit Autopwn to launch attacks against the system. We will be discussing all the concepts in penetration testing from scratch, so look out for more videos in the same channel.

Pentesting
hacking
Autopwning

Added: 1717 days ago by
admin
Views: 2183
Comments: 0

( Not yet rated )

Lecture 1: Introduction to Mathematical Optimization Part 1

7:43

By Sanjeev >>> 10th Aug 2009: This is part 1 of the series of videos on mathematical optimization. In this video I give an overview of general mathematical optimization problems and the necessary condition for a minimizer.
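
The first-order necessary condition (the derivative vanishes at an interior minimizer) can be illustrated with a central finite difference; the quadratic test function is an arbitrary choice for this sketch:

```python
def central_diff(f, x, h=1e-6):
    """Central finite-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: (x - 3.0) ** 2
grad_at_min = central_diff(f, 3.0)      # ~0 at the minimizer x* = 3
grad_elsewhere = central_diff(f, 5.0)   # ~4, so x = 5 cannot be a minimizer
```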

optimization
line
search
trust
region
methods

Added: 1843 days ago by
admin
Views: 2095
Comments: 0

( Not yet rated )

Reinforcement Learning: Monte Carlo & Intro to Ellipsoidal Constrained Agent Navigation (Path Planning for UGV)

84:32

SANJEEV SHARMA : 4th March 2010: REINFORCEMENT LEARNING: Lecture - 4: MONTE CARLO & Introduction to Convex-Reinforcement-Path (Ellipsoidal Constrained Agent Navigation).

CONTENTS:
TALOS, Introduction to ELLIPSOIDAL METHODS for Path Planning, First & Every Visit Monte Carlo, Value function estimation, Problem of Infinite Episodes & Exploring Starts, Generalized Policy Iteration, Eliminating the assumption of infinite episodes, Monte Carlo ES, e-soft & e-greedy policies, Eliminating Exploring Starts, On-Policy & Off-Policy Monte Carlo, Estimating one policy while following another.

DESCRIPTION: In this video I first provide a brief introduction to ECAN, a path-planning algorithm, and explain the need for Monte Carlo (estimation) methods. I then cover first-visit and every-visit Monte Carlo and value-function estimation with Monte Carlo; the problem of infinite episodes in the evaluation step of Generalized Policy Iteration (GPI), and the concept behind the value-iteration algorithm for eliminating the need for infinite episodes in that step; the Monte Carlo ES algorithm, which removes that need by using exploring starts; and on-policy Monte Carlo, which eliminates both assumptions. I then discuss an algorithm for off-policy Monte Carlo in which one policy is estimated while episodes are generated from another, a requirement that is independent of the environment's dynamics, and finally off-policy Monte Carlo itself.
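
First-visit Monte Carlo value estimation can be sketched as follows; the episode encoding as (state, reward) pairs and all names are assumptions of this sketch, not the lecture's notation:

```python
def first_visit_mc(episodes, gamma=1.0):
    """V(s) = average of returns following the FIRST visit to s per episode."""
    returns = {}                      # state -> list of first-visit returns
    for episode in episodes:
        seen = set()
        for t, (s, _) in enumerate(episode):
            if s in seen:
                continue              # only the first visit to s counts
            seen.add(s)
            g = sum((gamma ** k) * r for k, (_, r) in enumerate(episode[t:]))
            returns.setdefault(s, []).append(g)
    return {s: sum(gs) / len(gs) for s, gs in returns.items()}

# Two episodes visit 'A' with returns 3 and 1, so V('A') averages to 2.
V = first_visit_mc([[('A', 1), ('B', 2)], [('A', 1), ('C', 0)]])
```

Every-visit Monte Carlo differs only in dropping the `seen` check, so repeated visits within an episode also contribute returns.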

Reinforcement
Learning
Monte
Carlo
Ellipsoidal
Constrained
Agent
Navigation
Path
Planning
On-Policy
Monte
Carlo
Off-Policy
Monte
Carlo
e-soft
e-greedy
Exploring
Starts
GPI

Added: 1637 days ago by
admin
Views: 2087
Comments: 0

( Not yet rated )

Reinforcement Learning: Temporal Difference Learning

76:21

SANJEEV SHARMA : 21st March 2010: REINFORCEMENT LEARNING: Lecture - 5: TEMPORAL DIFFERENCE LEARNING.

CONTENTS:

Constant-alpha Monte Carlo; 1-Step Temporal Differencing; TD(0) for Prediction; Estimating Value Function using TD(0); SARSA On-Policy TD Control; Q-Learning Off-Policy TD Control; Actor-Critic Methods; R-Learning; Backup Diagrams for SARSA, Q-Learning, Monte Carlo and 1-Step TD(0).

DESCRIPTION:

In this lecture I first provide a very brief introduction to temporal-difference methods {TD(0), or 1-step TD(0)}, then give an overview of constant-alpha Monte Carlo and its similarity to temporal-difference learning. I discuss the prediction problem, i.e. estimating the state-value function using 1-step TD(0), and provide the backup diagrams for Monte Carlo and TD(0). I then discuss SARSA, an on-policy TD control algorithm, and Q-learning, an off-policy TD control algorithm, explaining in detail why SARSA is on-policy and Q-learning off-policy, first through the backup diagrams and then through the pseudocode for both. Finally I discuss actor-critic methods and the R-learning algorithm. [:)]

PATH PLANNING:

In the 2nd presentation on PATH PLANNING, I give a brief idea of which kinds of reinforcement-learning algorithms will help with Ellipsoidal Constrained Agent Navigation. TD methods are appropriate for path planning because of their ability to learn online and incrementally. This lecture is again supplementary material for Path Planning & Autonomous Navigation.
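
The SARSA (on-policy) and Q-learning (off-policy) updates covered in the lecture can be sketched as follows; the max over next actions is exactly what makes Q-learning off-policy. The tiny two-state setup is purely illustrative:

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """Off-policy TD control: the target uses the BEST next action."""
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.5, gamma=0.9):
    """On-policy TD control: the target uses the action actually taken next."""
    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])

actions = ['left', 'right']
Q = {(s, a): 0.0 for s in [0, 1] for a in actions}
# One observed transition: in state 0, 'right' earns reward 1, lands in state 1.
q_learning_update(Q, 0, 'right', 1.0, 1, actions)
```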

Reinforcement
Learning
Temporal
Difference
Learning
Monte
Carlo
SARSA
Q-Learning
R-Learning
Actor-Critic
Learning
1-Step
TD(0)

Added: 1620 days ago by
admin
Views: 2084
Comments: 0

(1 ratings)

Machine Learning:Exponential Family Distribution & Suff Statistics

19:38

Sanjeev Sharma> This is the 5th lecture in ML: Exponential Family Distributions and Sufficient Statistics. In this lecture I cover the general exponential family of distributions; almost every common distribution can be represented in exponential-family form. In the first part I show the step-wise derivation of SUFFICIENT STATISTICS, and then how to represent the GAUSSIAN and BERNOULLI distributions in exponential-family form.

Machine
Learning
Exponential
Family
Distribution
Sufficient
Statistics
Gaussian
Bernoulli
Distribution

Added: 1789 days ago by
admin
Views: 2057
Comments: 0

( Not yet rated )

Putting Wireless Interface Into monitor mode

3:12

By Prateek on 12th Oct, 09
Putting your wireless card into monitor mode allows you to monitor traffic without associating to any access point. This is different from promiscuous mode, which applies to both wired and wireless networks. In this video I show you how to put your wireless card into monitor mode.

Monitor
mode
interface
wireless

Added: 1781 days ago by
admin
Views: 1958
Comments: 0

( Not yet rated )

Deauthentication attacks

3:31

Prateek Gianchandani> Deauthentication attacks are used to disconnect a client from an access point. These attacks are useful when cracking WPA because they allow an attacker to capture the four-way handshake that occurs during reassociation.

deauthentication
aireplay

Added: 1806 days ago by
admin
Views: 1955
Comments: 0

( Not yet rated )

Path Planning: Ellipsoidal Surfaces & MVE.

14:20

SANJEEV SHARMA: 23rd November 2009. Presentation-1: Ellipsoidal Constrained Agent Navigation

Contents: Least Squares, SVM, Quadratic Constraints, Ellipsoids, Quadratic Discriminating Surface, Lowner-John Ellipsoid, Ellipsoidal Constraints, Semi-Definite Programming, Outlier Rejection.

Description: This presentation is not directly on path planning; it is intended to give an overview of two methods: ellipsoidal surfaces and the Löwner-John (minimum-volume) ellipsoid. An ellipsoidal surface is designed by placing constraints on the P matrix of a quadratic expression. The minimum-volume ellipsoid of a set C is the ellipsoid covering the convex hull of C, so finding it can be cast as a convex problem; finding an ellipsoidal surface can be cast as an SDP feasibility problem. This lecture gives an overview of how to combine the two algorithms for outlier detection and discrimination of two classes, and forms the base for upcoming lectures. In the first part I also introduce two formulations that will be discussed later, namely the least-squares and SVM formulations.

Convex
Optimization
Minimum
Volume
Ellipsoids
Ellipsoidal
Surfaces

Added: 1709 days ago by
admin
Views: 1949
Comments: 0

( Not yet rated )

Autonomous Waypoint Generation Strategy for On-Line Navigation in Unknown Environments

3:00

Sanjeev Sharma - August 05, 2012: This video demonstrates a framework for navigation and path planning in unknown 2D- and 3D-environments with limited field of view. It uses reinforcement learning to generate a waypoint in the robot's field of view. A path planner then generates a path to the waypoint. The process is iterated until the robot reaches the goal.

This video is attached to the paper in IROS Workshop on Robot Motion Planning: Online, Reactive, and in Real-Time, 2012. It demonstrates non-holonomic motion planning in unknown environment and other examples.

Sanjeev Sharma and Matthew E. Taylor, *Autonomous Waypoint Generation Strategy for On-Line Navigation in Unknown Environments*. In Proceedings of IROS Workshop on Robot Motion Planning: Online, Reactive, and in Real-Time, 2012.

Path
Planning,
Waypoint
Generation,
Navigation

Added: 752 days ago by
admin
Views: 1939
Comments: 0

( Not yet rated )

Nmap Part 1 (network mapping )

7:06

Prateek>In this video I show how Nmap can be used to map a network and display the various hosts that are up on it. I also show some techniques that can be used to speed up Nmap.

Nmap
port
scanning
hosts

Added: 1822 days ago by
admin
Views: 1875
Comments: 0

( Not yet rated )

Lecture 2- Line Search Methods Part 3

9:55

By Sanjeev>> Line search is one of the ways of solving unconstrained optimization problems. In this lecture I discuss the line-search algorithm, then the Wolfe and strong Wolfe conditions and their practical meaning through Matlab. I then prove the result about the step length and derive the necessary conditions for our function.
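
A backtracking line search enforcing the sufficient-decrease (Armijo) half of the Wolfe conditions can be sketched as follows, in 1-D for brevity; the Matlab demonstrations from the lecture are not reproduced here, and all names are illustrative:

```python
def backtracking(f, grad, x, p, c1=1e-4, shrink=0.5, t=1.0):
    """Shrink the step t until f(x + t*p) <= f(x) + c1 * t * f'(x) * p."""
    while f(x + t * p) > f(x) + c1 * t * grad * p:
        t *= shrink
    return t

f = lambda x: x ** 2
x = 1.0
grad = 2 * x            # f'(1) = 2
p = -grad               # steepest-descent direction
t = backtracking(f, grad, x, p)
# The accepted step is guaranteed to decrease f.
```

The full (strong) Wolfe conditions add a curvature test on the new gradient; backtracking alone only guarantees sufficient decrease.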

line
search
wolfe

Added: 1839 days ago by
admin
Views: 1870
Comments: 0

(1 ratings)

High Speed On-Line Motion Planning In Cluttered Environments

2:59

Sanjeev Sharma - July 21, 2012: This video demonstrates an online non-holonomic motion planner for navigation in cluttered environments. The algorithm selects a sequence of intermediate goals online, through which the robot navigates. The algorithm was tested experimentally on a differential-drive robot.

(i) Z. Shiller and S. Sharma, *High Speed On-Line Motion Planning in Cluttered Environments*, IROS 2012; (ii)Z. Shiller and S. Sharma, *On-Line Obstacle Avoidance at High Speeds*, Romansy, 2012.

This video is attached to the accepted paper in IROS 2012.

Online
Motion
Planning,
Non-Holonomic
Constraints

Added: 767 days ago by
admin
Views: 1854
Comments: 0

( Not yet rated )

Nmap Part 2 (Port scanning)

8:21

Prateek>In this video we show how one can use Nmap to find out which ports are open on a specific computer. We demonstrate different types of port scans and then compare the results to extract more detailed information.

Nmap
port
scanning
hosts

Added: 1822 days ago by
admin
Views: 1842
Comments: 0

( Not yet rated )

Machine Learning: Non Linear Regression

15:08

SANJEEV SHARMA: 3rd November 2009. Lecture 7 in Machine Learning: Non-Linear Regression. In this lecture I explain non-linear basis functions and the underlying concept that the model remains linear in its parameters. I then derive the analytical solution, discuss the concepts of under- and over-fitting, and give a hint of L1-norm regularization.
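
Regression with a polynomial basis stays linear in the parameters, so the normal equations still apply; the tiny Gaussian-elimination solver and all names below are illustrative, and noise-free data is used so the fit is exact rather than over- or under-fit:

```python
def solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def poly_fit(xs, ys, degree):
    """Expand x into phi(x) = [1, x, ..., x^degree], then solve the
    normal equations (Phi^T Phi) w = Phi^T y."""
    phi = [[x ** d for d in range(degree + 1)] for x in xs]
    n = degree + 1
    AtA = [[sum(p[i] * p[j] for p in phi) for j in range(n)] for i in range(n)]
    Atb = [sum(p[i] * y for p, y in zip(phi, ys)) for i in range(n)]
    return solve(AtA, Atb)

# Noise-free quadratic data y = x^2 is recovered exactly at degree 2.
w = poly_fit([0, 1, 2, 3, 4], [0, 1, 4, 9, 16], degree=2)
```

Raising the degree well past the data's complexity is where over-fitting appears; regularization (the L1 hint above) counteracts it.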

Non-Linear
Regression
Machine
Learning

Added: 1759 days ago by
admin
Views: 1833
Comments: 0

( Not yet rated )

Lecture 2-Line Search Methods Part 1

8:51

By Sanjeev>> Line search is one of the ways of solving unconstrained optimization problems. In this lecture I discuss the line-search algorithm, then the Wolfe and strong Wolfe conditions and their practical meaning through Matlab. I then prove the result about the step length and derive the necessary conditions for our function.

Line
Search
Step
Length
optimization
wolfe
strong
wolfe

Added: 1839 days ago by
admin
Views: 1817
Comments: 0

( Not yet rated )

Lecture 1: Introduction to Mathematical Optimization Part 2

9:00

By Sanjeev >>> 10th Aug 2009: This is part 2 of the series of videos on mathematical optimization. In this video I give an overview of general mathematical optimization problems and the necessary condition for a minimizer.

optimization
line
search
trust
region
methods

Added: 1843 days ago by
admin
Views: 1801
Comments: 0

( Not yet rated )

DATA MINING Introduction

2:31

By Sanjeev>> This video demonstrates what Data Mining actually is and why we need to study it.

Data
Mining

Added: 1832 days ago by
admin
Views: 1801
Comments: 0

( Not yet rated )

Reinforcement Learning: Kernelized Value Function Approximation

29:34

SANJEEV SHARMA 8th Nov 2010: Reinforcement Learning: Phase-II, Presentation-4: ARL-10/11 - (Lecture-4). Kernelized Value Function Approximation.

CONTENTS: Kernel Methods, Kernel Based Regularized Least-Squares Regression, Kernelized Value Function Approximation.

DESCRIPTION: Kernel methods are widely used in machine learning. The kernel trick facilitates classification by circumventing the need to construct higher-order basis functions: a kernel maps the feature space to a higher-dimensional space, enabling separation of the data set there. Reinforcement-learning methods such as GPRL and KLSTD have used the kernel trick. The Kernelized Value Function Approximation algorithm unifies these methods and provides a model-based solution for approximating the state-value function; it can be used for prediction problems. It uses the kernel-based regularized least-squares regression approach to find the relation between states and the corresponding expected γ-discounted total reward, and uses this model to find the kernel for the next state. The value function finally reduces to the sum of a geometric progression (GP) involving the KERNEL matrices, yielding an analytical solution for the approximation problem.

Kernel
Methods,
Kernel
Reinforcement
Learning,
Kernelized
Value
Function
Approximation

Added: 1388 days ago by
admin
Views: 1796
Comments: 0

( Not yet rated )

Arp spoofing using Arpspoof

3:54

ARP spoofing is an effective way to intercept, sniff, hijack and DoS connections. It is a more effective way of hijacking sessions, because it allows attackers to see incoming and outgoing communications, as if they were a proxy, as opposed to …

Arp
spoofing
arpspoof

Added: 1842 days ago by
admin
Views: 1776
Comments: 0

( Not yet rated )

Machine Learning: Probabilistic Interpretation of Least-Squares

14:14

Sanjeev Sharma>This is the 4th lecture in ML. In this lecture I present the probabilistic interpretation of least-squares regression, explaining the reason behind choosing the least-squares error function in the regression problem in machine learning: we assume a Gaussian (normal) distribution over the error terms. The later part covers the relation b/w maximum likelihood and least squares. (I also hint at the results you would get using other distributions, such as the Poisson and Laplacian, but as this is a topic of numerical optimization it is not discussed in this lecture and will be covered in the Optimization channel.)

Machine
Learning
Maximum
Log
Likelihood
Probabilistic
Interpretation
Least
Squares
Optimization

Added: 1790 days ago by
admin
Views: 1776
Comments: 0

( Not yet rated )

Lecture 2-Line Search Methods Part 2

9:57

By Sanjeev>> Line search is one of the ways of solving unconstrained optimization problems. In this lecture I discuss the line-search algorithm, then the Wolfe and strong Wolfe conditions and their practical meaning through Matlab. I then prove the result about the step length and derive the necessary conditions for our function.

wolfe
line
search
optimization

Added: 1839 days ago by
admin
Views: 1768
Comments: 0

( Not yet rated )

Etherape

1:06

Prateek>EtherApe is a graphical network monitor for Unix modeled after etherman. Featuring link-layer, IP and TCP modes, it displays network activity graphically: hosts and links change in size with traffic, and protocols are displayed color-coded.
It supports Ethernet, FDDI, Token Ring, ISDN, PPP and SLIP devices. It can filter the traffic to be shown, and can read traffic from a file as well as live from the network.

etherape
network
traffic

Added: 1815 days ago by
admin
Views: 1759
Comments: 0

( Not yet rated )

Reinforcement Learning: Iterative Algorithms & Single Agent Path Planning in Static Environment under FOMDPs

43:37

SANJEEV SHARMA : 18th Jan 2010: REINFORCEMENT LEARNING: Lecture - 3: ITERATIVE ALGORITHMS & SINGLE AGENT PATH PLANNING IN FOMDPs. (Fully observable MDPs).

CONTENTS:

Optimal Value Functions, Bellman Optimality Equation, Relation b/w Optimal Action value function and Optimal State-Value Function, Policy Evaluation, Policy Iteration, Value Iteration, Policy Improvement, Agent Path Planning in Static Environment in FOMDPs.

DESCRIPTION:

In this lecture I first recall a few things from the previous lecture. I then introduce optimal policies, detail the relationship b/w the optimal state-value function and the optimal action-value function, and state the Bellman optimality equation for both the state-value and action-value functions. I then give a brief overview of my example of an agent's path planning in a static environment under fully observable MDPs. Finally, I provide the details of the four most important algorithms: policy evaluation, policy improvement, policy iteration and value iteration.

Date-23rd January 2011: The code I wrote about a year ago for this lecture: Path_Planning_Policy_Evaluation_Sanjeev.zip. Just run Sanjeev_Main_Path.m .
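
Iterative policy evaluation, the first of the four algorithms above, can be sketched as follows; this is an illustrative stand-alone sketch (not the attached Matlab code), and the one-state self-looping MDP is an assumption made to keep it short:

```python
def policy_evaluation(states, step, policy, gamma=0.9, tol=1e-10):
    """Sweep Bellman backups V(s) <- r + gamma * V(s') for a deterministic
    policy/environment until the largest per-sweep change is below tol.
    step(s, a) -> (reward, next_state); policy(s) -> action."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            r, s_next = step(s, policy(s))
            v_new = r + gamma * V[s_next]
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < tol:
            return V

# State 0 earns reward 1 and self-loops; its value converges to 1/(1-gamma) = 10.
V = policy_evaluation([0], lambda s, a: (1.0, 0), lambda s: 'stay')
```

Policy iteration wraps this evaluation in a loop with a greedy policy-improvement step; value iteration truncates the evaluation to a single backup per sweep.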

Reinforcement
Learning
Policy
Evaluation
Policy
Improvement
Policy
Iteration
Value
Iteration
Bellman
Optimality
Equation
Single
Agent
Path
Planning
FOMDPs
Static
Environment

Added: 1682 days ago by
admin
Views: 1753
Comments: 0

( Not yet rated )

Socket Programming In python Part-3

4:56

By Prateek Gianchandani on Oct 15

In this video I show you how to create a TCP client using sockets. The client first connects to the server on the specified port and then flags a message confirming that it has connected to the server.
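
A sketch of such a TCP client, paired with a throwaway localhost server so the example is self-contained and runnable; the confirmation message and all names are illustrative, not the video's own code:

```python
import socket
import threading

def serve_once(srv):
    """Accept one connection, send a confirmation message, and shut down."""
    conn, _ = srv.accept()
    conn.sendall(b'client connected')
    conn.close()
    srv.close()

def tcp_client(host, port):
    """Connect to (host, port) and return the server's confirmation message."""
    with socket.create_connection((host, port), timeout=5) as sock:
        return sock.recv(1024).decode()

# Bind to an OS-assigned free port so the sketch never collides with another
# service, then serve a single connection on a background thread.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()
message = tcp_client('127.0.0.1', port)
```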

Programming
python
socket

Added: 1777 days ago by
admin
Views: 1638
Comments: 0

( Not yet rated )

News

Interested in learning how Computer Forensics specialists work, deal with legal issues, gather evidence, and perform post-mortem and live forensics? We recommend the course on Computer Forensics by Infosec Institute. Read the review here.

Interested in receiving some ethical hacking training? We recommend the course on Ethical Hacking by Infosec Institute.

Interested in learning more about Web Application Security? We recommend the course on Web Application Penetration Testing by Infosec Institute.

Interested in learning more about the most advanced tools and techniques used in securing and breaking into networks these days? We recommend the course on Advanced Ethical Hacking by Infosec Institute.

Interested in learning more about the latest trends in Information Security? We recommend the course on Security Awareness for IT Users by Infosec Institute.

In environments where GPS is noisy and maps are unavailable, such as indoors or in dense urban environments, a UAV (unmanned air vehicle) runs the risk of becoming lost, operating in high threat regions, or colliding with obstacles. The size, weight and budget limitations of micro air vehicles (MAVs) typically preclude high-precision inertial navigation units that can mitigate the loss of GPS. We are developing estimation and planning algorithms that allow MAVs to use environmental sensors such as range finders to estimate their position, build maps of the environment and fly safely and robustly.

Enjoy :-)

Google announced on Tuesday that the Internet giant is considering exiting the Chinese market after sophisticated online attacks targeted its systems to breach the Gmail accounts of pro-democracy activists.

In a post to its blog, the company stated that an investigation into an attack against the Gmail accounts of pro-democracy activists turned up evidence of a much broader assault against Google's systems. The attack -- first noticed in mid-December and considered "highly sophisticated and targeted" -- resulted in the "theft of intellectual property," the company stated. It's unclear from the statement whether the two Gmail accounts accessed by attackers constituted intellectual property.

Top officials from U.S. law enforcement and government agencies speaking at SC World Congress in New York this week said progress has been made in fighting cybercrime recently, but increased collaboration with individuals from the private sector and international law enforcement bodies is needed to keep up the momentum. Shawn Henry, assistant director of the FBI Cyber Division, said that efforts to cooperate with foreign law enforcement agencies have paid off in the fight against cybercriminals. Six years ago, for example, the FBI could not respond to an attack that was traced back to an individual outside of the U.S, Henry said. Today, FBI agents are working hand-in-hand with international law enforcement agents in Estonia, Romania and other countries to build cases against cybercriminals and make arrests.

Team MIT's TALOS autonomous vehicle, readied for the DARPA Urban Challenge, is outfitted with a multitude of sensors. The sensors scan and sweep across an intersection to show approaching and standing traffic as well as curbs and lane markings, which TALOS's algorithm must assess before continuing.

By forming a coordinated formation, it is possible to achieve flight integrity with less fuel consumption, increasing the possibility of a mission's success. Even with such unique flight capabilities, the helicopter teams will be confronted by very challenging situations. The potential for accidents is increased by requirements to fly in close formations and under harsh conditions including poor weather, extremely low altitudes, low visibility, extreme temperatures, noise, vibrations, blasts, flashes, radiation and battlefield air pollution. The effects of battlefield stress exerted on aircrew increases dramatically under these adverse circumstances. Therefore, computer-assisted autonomous formation flight can help to diminish battlefield stress. Reduction of pilots' stress directly means the extension of radius of action and minimizing uncertainties during movements.

Security researchers at Microsoft have found a way to break the end-to-end security guarantees of HTTPS without breaking any cryptographic scheme. Affected browsers include Microsoft's Internet Explorer 8, Mozilla Firefox, Google Chrome, Apple Safari and Opera. During a research project concluded earlier this year, the Microsoft Research team discovered a set of vulnerabilities exploitable by a malicious proxy targeting browsers' rendering modules above the HTTP/HTTPS layer.

It's just been a few short months since a proposed bill called for the creation of a National Cybersecurity Advisor, but it looks like there's now not one but two new positions in the offing, with both the Pentagon and President Obama himself announcing plans for some newly elevated offices charged with keeping the nation's networks secure. While a specific "Cybersecurity Czar" hasn't yet been named, the White House position will apparently be a member of both the National Security Council and National Economic Council and, in addition to coordinating U.S. response in the event of a major attack, the office will also be tasked with protecting privacy and civil liberties. Details on the new Pentagon office, on the other hand, are expectedly even less specific although, according to

Miscreants have developed one of most sophisticated click fraud malware applications to date. The Trojan code - dubbed FFsearcher by security firm SecureWorks - plugs into a Google API that allows webmasters to add a Google-powered search widget (called “Google Custom Search”) to their website. In normal use, search results made via the widget are displayed alongside Google AdSense ads, with webmasters receiving a small fee every time a surfer follows an ad.

The Central Intelligence Agency, PayPal, and hundreds of other organizations are under an unexplained assault that's bombarding their websites with millions of compute-intensive requests.

The "massive" flood of requests is made over the websites' SSL, or secure-sockets layer, port, causing them to consume more resources than normal connections, according to researchers at Shadowserver Foundation, a volunteer security collective. The torrent started about a week ago and appears to be caused by recent changes made to a botnet known as Pushdo.

Google's Self-Driving Car: Sebastian Thrun Talk

Autonomous Waypoint Generation Strategy for On-Line Navigation in Unknown Environments

High Speed On-Line Motion Planning In Cluttered Environments

Unconstrained Minimization: Steepest Descent Methods & Convergence Analysis

Unconstrained Minimization: Convergence Analysis of Gradient Descent Using Line Search


Copyright © 2009-2014 Searching-Eye. All rights reserved.