\documentclass{article}
\usepackage{fullpage}
\usepackage{parskip}
\usepackage{titlesec}
\usepackage{xcolor}
\usepackage[colorlinks = true,
linkcolor = blue,
urlcolor = blue,
citecolor = blue,
anchorcolor = blue]{hyperref}
\usepackage[natbibapa]{apacite}
\usepackage{eso-pic}
\AddToShipoutPictureBG{\AtPageLowerLeft{\includegraphics[scale=0.7]{powered-by-Authorea-watermark.png}}}
\renewenvironment{abstract}
{{\bfseries\noindent{\abstractname}\par\nobreak}\footnotesize}
{\bigskip}
\titlespacing{\section}{0pt}{*3}{*1}
\titlespacing{\subsection}{0pt}{*2}{*0.5}
\titlespacing{\subsubsection}{0pt}{*1.5}{0pt}
\usepackage{authblk}
\usepackage{graphicx}
\usepackage[space]{grffile}
\usepackage{latexsym}
\usepackage{textcomp}
\usepackage{longtable}
\usepackage{tabulary}
\usepackage{booktabs,array,multirow}
\usepackage{amsfonts,amsmath,amssymb}
\usepackage{placeins}% provides \FloatBarrier, used before \end{document}
\providecommand\citet{\cite}
\providecommand\citep{\cite}
\providecommand\citealt{\cite}
% You can conditionalize code for latexml or normal latex using this.
\newif\iflatexml\latexmlfalse
\providecommand{\tightlist}{\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}%
\AtBeginDocument{\DeclareGraphicsExtensions{.pdf,.PDF,.eps,.EPS,.png,.PNG,.tif,.TIF,.jpg,.JPG,.jpeg,.JPEG}}
\usepackage[utf8]{inputenc}
\usepackage[ngerman,english]{babel}
\begin{document}
\title{Welcome to Authorea!}
\author[ ]{OUDELHA}
\affil[ ]{}
\vspace{-1em}
\date{}
\begingroup
\let\center\flushleft
\let\endcenter\endflushleft
\maketitle
\endgroup
\subsection*{\texorpdfstring{\textbf{M2 RAM}\\
\textbf{OUDELHA Arezki}\\
\textbf{A tutorial on support vector
regression}}{M2 RAM OUDELHA Arezki A tutorial on support vector regression}}\label{m2-ram-oudelha-arezki-a-tutorial-on-support-vector-regression}
\section*{Introduction}\label{introduction}
The analyzed article is titled ``\textbf{A tutorial on support vector
regression}'', written by \textbf{Alexander J. Smola}, a professor in the
Machine Learning Department at Carnegie Mellon University in Pittsburgh, USA,
and \textbf{Bernhard Schölkopf}, director at the Max Planck Institute for
Intelligent Systems in Germany. It was published by \textbf{Statistics and
Computing} in 2004 and printed in the Netherlands. As stated on its official
site, \textbf{Statistics and Computing} ``is a bi-monthly refereed journal
that publishes papers covering the interface between the statistical and
computing sciences. The journal includes techniques for evaluating
analytically intractable problems, such as bootstrap resampling, Markov chain
Monte Carlo, sequential Monte Carlo, approximate Bayesian computation, search
and optimization methods, stochastic simulation and Monte Carlo, graphics,
computer environments, statistical approaches to software errors, information
retrieval, machine learning, statistics of databases and database technology,
huge data sets and big data analytics, computer algebra, graphical models,
image processing, tomography, inverse problems and uncertainty
quantification''.

The article deals with Support Vector Machines (SVMs). The authors structured
their work as follows: they begin with an overview of Support Vector (SV)
machines and some application examples, then add a short summary of the
techniques and algorithms currently used for training SV machines. In the
next parts we will examine the context of the article, its positioning and
contributions, and finally we will experiment with some of the algorithms
proposed by the authors.
\section*{Context of the work}\label{context-of-the-work}
Supervised learning comprises several techniques, and among them we find
SVMs, which are very effective at solving discrimination (classification) and
regression problems.

SVMs appeared in the 1990s as an extension of the Vapnik-Chervonenkis (VC)
theory. Thanks to their effectiveness, their ability to handle large data
sets, their small number of hyperparameters, their theoretical guarantees,
and their good results in practice, SVMs quickly entered the world of
statistics.

SVMs have a broad scope of application (information retrieval,
bioinformatics, finance, computer vision, \ldots). The performance of support
vector machines is comparable to that of neural networks or Gaussian mixture
models, and depending on the type of data, SVMs can match or even outperform
them.

SVMs can be used to solve classification problems, that is, deciding which
class a sample belongs to, or regression problems, that is, representing a
set of scattered data by a known function or predicting the numerical value
of a variable. To solve these two problems we must construct a function $h$
that takes as input a vector $x$ and maps it to an output $y$:
\[ y = h(x) \]

\textbf{Example}

To be more explicit, I took an example of an SVM application from Wikipedia:

``Imagine a plane (two-dimensional space) in which points are distributed in
two groups. These points are associated with a group: points (+) for $y > x$
and points ($-$) for $y < x$. We can find an obvious linear separator in this
example, the line $y = x$. The problem is said to be linearly separable.''
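The linearly separable example above can be reproduced as a minimal sketch,
assuming scikit-learn and NumPy are installed (the sampling and the
separation gap of 0.1 are my own illustrative choices, not from the article):

```python
# Points (+) for y > x and points (-) for y < x, separated by a linear SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
# Keep points away from the line y = x so the two groups are well separated.
X = X[np.abs(X[:, 1] - X[:, 0]) > 0.1]
y = np.where(X[:, 1] > X[:, 0], 1, -1)   # +1 above the line, -1 below

clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))                   # training accuracy
```

The learned decision boundary is (up to numerical tolerance) the line
$y = x$, so the classifier $h(x)$ realizes exactly the separator described in
the quoted example.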
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.70\columnwidth]{figures/2/2}
\end{center}
\end{figure}
For more complicated problems, there is generally no linear separation. For
example, imagine a plane in which the ($-$) points are grouped in a circle
with (+) points all around: no linear separator can properly separate the
groups, so the problem is not linearly separable. There is no separating
hyperplane.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.70\columnwidth]{figures/1/1}
\end{center}
\end{figure}
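The circular case above can be sketched as follows, assuming scikit-learn is
installed; the kernel choice (RBF) is the standard way to handle this
configuration, though the article itself discusses kernels in far more depth:

```python
# (-) points inside a circle, (+) points around it: an RBF kernel separates
# them, while a linear kernel cannot.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 2))
r = np.linalg.norm(X, axis=1)
X = X[np.abs(r - 1.0) > 0.2]             # leave a gap around the unit circle
y = np.where(np.linalg.norm(X, axis=1) < 1.0, -1, 1)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)
print(linear.score(X, y), rbf.score(X, y))
```

The RBF model reaches near-perfect training accuracy, while the linear model
cannot do better than roughly predicting the majority class.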
Now consider the case where the problem is linearly separable. Even in this
simple case, the choice of separating hyperplane is not obvious. Indeed,
there is an infinity of separating hyperplanes whose learning performance is
identical (the empirical risk is the same), but whose generalization
performance can be very different. To resolve this problem, it has been shown
that there exists a unique optimal hyperplane, defined as the hyperplane that
maximizes the margin between the samples and the separating hyperplane.

There are theoretical reasons for this choice: Vapnik showed that the
capacity of the class of separating hyperplanes decreases as the margin
increases.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.70\columnwidth]{figures/4/4}
\end{center}
\end{figure}\selectlanguage{english}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.70\columnwidth]{figures/Capture2/Capture2}
\end{center}
\end{figure}
The margin is the distance between the hyperplane and the nearest samples;
these samples are called support vectors.
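As a small illustrative sketch (my own toy data, assuming scikit-learn), a
fitted \texttt{SVC} exposes exactly these nearest samples through its
\texttt{support\_vectors\_} attribute:

```python
# Two well-separated clusters; only the samples nearest the separating
# hyperplane end up as support vectors.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0],
              [3.0, 3.0], [3.0, 4.0], [4.0, 3.0], [4.0, 4.0]])
y = np.array([-1, -1, -1, -1, 1, 1, 1, 1])

clf = SVC(kernel="linear", C=10.0).fit(X, y)
print(clf.support_vectors_)   # only the closest points, here (1,1) and (3,3)
```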
\textbf{Below, a few references that I used:}

Vapnik, V. (1998). \emph{Statistical Learning Theory}. Wiley-Interscience,
New York.

Schölkopf, B. and Smola, A. J. (2002). \emph{Learning with Kernels: Support
Vector Machines, Regularization, Optimization, and Beyond}.
\section*{Positioning}\label{positioning}
In this part, I tried to summarize as best I can what Alex Smola and Bernhard
Schölkopf said in their article about the positioning of SVMs.

The SV algorithm was born from a succession of works developed by Vladimir
Vapnik, beginning in the 1960s in Russia with the generalized portrait
algorithm and the Vapnik-Chervonenkis (VC) theory, a statistical learning
theory that characterizes the properties of learning machines that allow them
to generalize well to unseen data. SVMs were developed mainly at the AT\&T
laboratories by Vladimir Vapnik and colleagues; we can cite, among others,
Boser, Guyon and Vapnik (1992, 1993), Cortes and Vapnik (1995), Schölkopf,
Burges and Vapnik (1995, 1996), and Vapnik, Golowich and Smola (1997). The
work and research on SVMs are oriented towards real-world applications,
initially focusing on OCR (Optical Character Recognition); within a very
short time they became widely used and competitive with the best available
object recognition systems.
\section*{Contribution}\label{contribution}
The article is a tutorial on SVMs, explaining how to use them and their
limits, and the authors give us some current algorithms to use. The article
thus contributes to a better understanding of SVMs and summarizes many works
that have been done on this subject. It should serve as a self-contained
introduction to support vector regression for readers who are new to this
rapidly developing field of research. On the other hand, it attempts to give
a short overview of recent work and research done on the topic.
\section*{Experimentation}\label{experimentation}
I took the following example algorithm, applied to data from the scikit-learn
platform, reproduced it with the Python toolbox, and then changed some
parameters specific to the SVM algorithm.
``Comparison of different linear SVM classifiers on a 2D projection of the
iris dataset. We only consider the first 2 features of this dataset:
\begin{itemize}
\tightlist
\item Sepal length
\item Sepal width
\end{itemize}
This example shows how to plot the decision surface for four SVM classifiers
with different kernels.

The linear models \texttt{LinearSVC()} and \texttt{SVC(kernel='linear')}
yield slightly different decision boundaries. This can be a consequence of
the following differences:
\begin{itemize}
\tightlist
\item \texttt{LinearSVC} minimizes the squared hinge loss while \texttt{SVC}
minimizes the regular hinge loss.
\item \texttt{LinearSVC} uses the One-vs-All (also known as One-vs-Rest)
multiclass reduction while \texttt{SVC} uses the One-vs-One multiclass
reduction.
\end{itemize}
Both linear models have linear decision boundaries (intersecting hyperplanes)
while the non-linear kernel models (polynomial or Gaussian RBF) have more
flexible non-linear decision boundaries with shapes that depend on the kind
of kernel and its parameters.''
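The scikit-learn experiment described above can be reproduced as the
following sketch, assuming scikit-learn is installed. Plotting is omitted;
the models are fitted and their training accuracies printed (the parameter
values \texttt{C=1.0} and \texttt{gamma=0.7} follow the scikit-learn
example):

```python
# Four SVM classifiers trained on the first two iris features
# (sepal length and sepal width), as in the scikit-learn example.
from sklearn import datasets, svm

iris = datasets.load_iris()
X = iris.data[:, :2]                     # sepal length, sepal width
y = iris.target

C = 1.0                                  # regularization parameter
models = {
    "SVC linear": svm.SVC(kernel="linear", C=C),
    "LinearSVC": svm.LinearSVC(C=C, max_iter=10000),
    "SVC rbf": svm.SVC(kernel="rbf", gamma=0.7, C=C),
    "SVC poly": svm.SVC(kernel="poly", degree=3, gamma="auto", C=C),
}
for name, model in models.items():
    model.fit(X, y)
    print(name, round(model.score(X, y), 3))
```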
The following pictures show the proposed program.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.70\columnwidth]{figures/6/6}
\end{center}
\end{figure}\selectlanguage{english}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.70\columnwidth]{figures/7/7}
\end{center}
\end{figure}\selectlanguage{english}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.70\columnwidth]{figures/8/8}
\end{center}
\end{figure}
~\\\selectlanguage{english}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.70\columnwidth]{figures/9/9}
\end{center}
\end{figure}\selectlanguage{english}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.70\columnwidth]{figures/10/10}
\end{center}
\end{figure}\selectlanguage{english}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.70\columnwidth]{figures/11/11}
\end{center}
\end{figure}
\section*{Conclusion}\label{auto-label-section-122414}
The tutorial proposed by Alex J. Smola and Bernhard Schölkopf is a good
introduction to understanding SVMs, but it could not encompass everything
there is to know about them, owing to the large body of work done on this
subject and the great number of algorithms used in this field. It
nevertheless remains a very effective work, thanks to its clarity, its
detail, the accuracy of its results, and its optimal algorithms.
\selectlanguage{english}
\FloatBarrier
\end{document}