\documentclass[10pt]{article}
\usepackage{fullpage}
\usepackage{setspace}
\usepackage{parskip}
\usepackage{titlesec}
\usepackage[section]{placeins}
\usepackage{xcolor}
\usepackage{breakcites}
\usepackage{lineno}
\usepackage{hyphenat}
\PassOptionsToPackage{hyphens}{url}
\usepackage[colorlinks = true,
linkcolor = blue,
urlcolor = blue,
citecolor = blue,
anchorcolor = blue]{hyperref}
\usepackage{etoolbox}
\makeatletter
\patchcmd\@combinedblfloats{\box\@outputbox}{\unvbox\@outputbox}{}{%
\errmessage{\noexpand\@combinedblfloats could not be patched}%
}%
\makeatother
\usepackage[round]{natbib}
\let\cite\citep
\renewenvironment{abstract}
{{\bfseries\noindent{\abstractname}\par\nobreak}\footnotesize}
{\bigskip}
\titlespacing{\section}{0pt}{*3}{*1}
\titlespacing{\subsection}{0pt}{*2}{*0.5}
\titlespacing{\subsubsection}{0pt}{*1.5}{0pt}
\usepackage{authblk}
\usepackage{graphicx}
\usepackage[space]{grffile}
\usepackage{latexsym}
\usepackage{textcomp}
\usepackage{longtable}
\usepackage{tabulary}
\usepackage{booktabs,array,multirow}
\usepackage{amsfonts,amsmath,amssymb}
\providecommand\citet{\cite}
\providecommand\citep{\cite}
\providecommand\citealt{\cite}
% You can conditionalize code for latexml or normal latex using this.
\newif\iflatexml\latexmlfalse
\providecommand{\tightlist}{\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}%
\AtBeginDocument{\DeclareGraphicsExtensions{.pdf,.PDF,.eps,.EPS,.png,.PNG,.tif,.TIF,.jpg,.JPG,.jpeg,.JPEG}}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\begin{document}
\title{Applied 1 Review}
\author[1]{David Kozak}%
\affil[1]{Colorado School of Mines}%
\vspace{-1em}
\date{\today}
\begingroup
\let\center\flushleft
\let\endcenter\endflushleft
\maketitle
\endgroup
\sloppy
In this review I have pulled variously from Evans'~\emph{Partial
Differential Equations}, Haberman's~\emph{Applied Partial Differential
Equations with Fourier Series and Boundary Value Problems}, and our
textbook by Guenther and Lee,~\emph{Partial Differential Equations of
Mathematical Physics and Integral Equations}. The key, it seems, is to
begin by identifying what type of PDE we are trying to solve. This
determines what methods we are able to use, what the solution might look
like (uniqueness?), whether there is a solution (existence?), and what to do
if there isn't (approximation). So far we have studied various methods
to use supposing there is a solution. We begin with the~\textbf{method
of characteristics}, which is used to solve first order PDEs. It works
for linear, quasi-linear, and fully nonlinear equations. It is generally used to
solve first order PDEs but can be used to solve any~\textbf{hyperbolic}
partial differential equation that has a solution. We have also briefly
covered \textbf{Fourier series}, as well as their infinite extension,
the~\textbf{Fourier transform}. The Fourier series is used to
represent a function~\emph{on a finite domain~}so that we can solve
the PDE. The Fourier transform is used to simplify a system of equations
by either turning a PDE into an ODE or an ODE into an algebraic
equation. Finally, we learned about~\textbf{separation of variables}, a method
for solving PDEs that works in both the homogeneous and inhomogeneous cases.
In addition to being able to solve the PDEs, it is important to be able
to classify them, for the eventuality when we cannot. It is also good to
know (a) when a solution exists, (b) whether an existing solution is
unique, and (c) whether the problem is well posed. Where we have answers
to these questions we will address them.
\section*{Types of Second Order PDEs}
{\label{919565}}
Consider a second order PDE which has the following form:
\par\null
\begin{equation}
A\frac{\partial^2 u(x,y)}{\partial x^2} + B \frac{\partial^2 u(x,y)}{\partial x\partial y} + C \frac{\partial^2 u(x,y)}{\partial y^2} + \text{(lower order terms)} = 0 \end{equation}
where the lower order terms are dominated by the second order terms and
become inconsequential. Then we can say the following:
The equation is \textbf{hyperbolic} if $B^2-4AC > 0$; the wave equation, $u_{tt} = c^2u_{xx}$, is the canonical example, and these are the only type of problem which we have learned to solve. The equation is \textbf{parabolic} if $B^2-4AC = 0$; this corresponds, for instance, to the heat equation: $k\frac{\partial^2 u}{\partial x^2} = \frac{\partial u}{\partial t}$. The equation is \textbf{elliptic} if $B^2-4AC <0$. An example is the Laplace equation, $\nabla^2u=0$; elliptic equations are best for static problems, an example being the steady state of the heat equation. (Note that with the cross term written as $Bu_{xy}$ rather than $2Bu_{xy}$, the discriminant carries the factor of 4.)
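The classification test is mechanical enough to check by script. A minimal sketch in Python (the function name and sample coefficients are my choices, not from the text):

```python
def classify(A, B, C):
    """Classify A u_xx + B u_xy + C u_yy + (lower order terms) = 0
    by the sign of its discriminant B^2 - 4AC."""
    disc = B * B - 4.0 * A * C
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

# Wave equation u_tt = c^2 u_xx, i.e. c^2 u_xx - u_tt = 0: A = c^2, B = 0, C = -1.
print(classify(4.0, 0.0, -1.0))  # hyperbolic
# Heat equation k u_xx - u_t = 0: the only second order term is k u_xx.
print(classify(1.0, 0.0, 0.0))   # parabolic
# Laplace equation u_xx + u_yy = 0.
print(classify(1.0, 0.0, 1.0))   # elliptic
```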
\section*{Method of Characteristics}
{\label{697570}}
\subsection*{Quasi-linear case}
{\label{517600}}
The idea behind the method of characteristics is to determine curves
along which the PDE becomes an ODE that we can solve. The ODEs can then
be solved along with the initial conditions to provide a solution to the
PDE. Consider the quasi-linear PDE
\begin{equation*} A(x,y,u)\frac{\partial u(x,y)}{\partial x} + B(x,y,u)\frac{\partial u(x,y)}{\partial y} + C(x,y,u) = F(x,y,u,p,q) =0, \end{equation*}
where $p = \frac{\partial u}{\partial x}$ and $q = \frac{\partial u}{\partial y}$, and $A$, $B$, $C$ are all functions of $x$, $y$, $u$. As mentioned, the goal is to reduce it to a system of ODEs, so we reparameterize $x$, $y$, and $u$ as functions of $s$ and $t$. An example serves to clarify:
\begin{enumerate}
\item Solve the following
\begin{equation*}
xu_x + u_y = 1
\end{equation*}
subject to
\begin{equation*}
u(x,0) = e^x
\end{equation*}
\end{enumerate}
As mentioned, we reparameterize. Beginning with the initial conditions:
\begin{align*}
x(t=0, s) &= s \\
y(t=0, s) &= 0 \\
u(t=0, s) &= e^s.
\end{align*}
We use the chain rule, $\frac{\partial u}{\partial t} = \frac{\partial x}{\partial t} \frac{\partial u}{\partial x} + \frac{\partial y}{\partial t} \frac{\partial u}{\partial y}$, to set $\frac{\partial x}{\partial t} = A(x,y,u), \frac{\partial y}{\partial t} = B(x,y,u), \frac{\partial u}{\partial t} = -C(x,y,u),$ so that we have the following ODEs:
\begin{align*}
\frac{\partial x}{\partial t} &= x \\
\frac{\partial y}{\partial t} &= 1 \\
\frac{\partial u}{\partial t} &= 1.
\end{align*}
We solve the ODEs to get,
\begin{align*}
x &= se^t, \\
y &= t, \\
u &= t + e^s.
\end{align*}
We then map back to $(x,y)$ by noting that $t = y$ and $s = xe^{-y}$. Plugging back in for $u$, we get the solution $u = y + e^{xe^{-y}}$.
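As a quick sanity check of the example (a sketch; the sample points are my choices), we can verify numerically that the closed form $u = y + e^{xe^{-y}}$ agrees with the parametric characteristic solution and recovers the initial condition $u(x,0)=e^x$:

```python
import math

# Characteristic solution from the worked example:
#   x(s, t) = s e^t,  y(s, t) = t,  u(s, t) = t + e^s.
def u_closed(x, y):
    """Closed form obtained by eliminating (s, t): u = y + exp(x e^{-y})."""
    return y + math.exp(x * math.exp(-y))

# Agreement with the parametric solution on a grid of (s, t) values.
for s in (0.5, 1.0, 2.0):
    for t in (0.0, 0.3, 1.0):
        x, y = s * math.exp(t), t
        assert abs(u_closed(x, y) - (t + math.exp(s))) < 1e-12

# The initial condition u(x, 0) = e^x is recovered at t = 0.
assert abs(u_closed(2.0, 0.0) - math.exp(2.0)) < 1e-12
```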
\subsection*{Nonlinear case}
{\label{835587}}
We have a similar setup to the quasi-linear case, except that $F$ need
no longer be linear in~\emph{p} and~\emph{q}. As one might imagine this
complicates matters, but the procedure is nearly identical. Once again
we have a system:
\begin{equation}\label{nonlinearcharacteristics} A(x,y,u,p,q)\frac{\partial u(x,y)}{\partial x} + B(x,y,u,p,q)\frac{\partial u(x,y)}{\partial y} + C(x,y,u,p,q) = F(x,y,u,p,q) =0, \end{equation}
where $p = \frac{\partial u}{\partial x}$ and $q = \frac{\partial u}{\partial y}$, and $A$, $B$, $C$ are all functions of $x, y, u, p, q$. Note also that $u_{xy}=p_y=q_x=u_{yx}$. The process follows logical steps but is slightly different; once again an example serves to illustrate:
\begin{enumerate}
\item
Solve the following,
\begin{equation}\label{problem}
u_xu_y=1,
\end{equation}
subject to the initial conditions,
\begin{equation*}
u(x,0)=\ln x.
\end{equation*}
\end{enumerate}
The easy part is to reparameterize, beginning with the initial conditions. To do this we must recognize that there are no initial conditions for $p$ and $q$, so we must determine what they should be. We create the following system of equations:
\begin{align}\label{forpq}
F(x_0, y_0, u_0, p_0, q_0 )&= 0 \\
p_0\frac{\partial x_0}{\partial s} + q_0\frac{\partial y_0}{\partial s} &= \frac{\partial u_0}{\partial s}
\end{align}
where $x_0 \equiv x(t=0, s)$. The latter comes from the chain rule; the former requires the initial data to satisfy the PDE itself. We then use this and equation (\ref{problem}) to solve for the initial conditions:
\begin{align*}
x(t=0, s) &= s, \\
y(t=0, s) &= 0, \\
u(t=0, s) &= \ln(s), \\
p(t=0, s) &= 1/s, \\
q(t=0, s) &= s.
\end{align*}
What looks like voodoo magic here is the set of characteristic (Charpit) equations, obtained by differentiating $F=0$ along the characteristic curves: $\frac{\partial x}{\partial t}= F_p$, $\frac{\partial y}{\partial t}= F_q$, $\frac{\partial u}{\partial t}= pF_p + qF_q$, $\frac{\partial p}{\partial t}= -(F_x + pF_u)$, and $\frac{\partial q}{\partial t}= -(F_y + qF_u)$. Plugging these values in, we get
\begin{align*}
\frac{\partial x}{\partial t} = F_p &= q, \\
\frac{\partial y}{\partial t} = F_q &= p, \\
\frac{\partial u}{\partial t} = pF_p + qF_q &= 2pq, \\
\frac{\partial p}{\partial t} = -(F_x+pF_u) &= 0, \\
\frac{\partial q}{\partial t} = -(F_y+qF_u) &= 0.
\end{align*}
We go through the same process of determining the relationship between $x,y$ and $s, t$ to get the solution of the problem.
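Carrying the process through (my computation; the text does not display it): integrating the system with the initial conditions gives $x = s(1+t)$, $y = t/s$, $u = \ln s + 2t$, and eliminating $t = sy$ leaves $x = s + s^2y$ with $u = \ln s + 2sy$. A numerical sketch checking that this candidate really satisfies $u_xu_y = 1$ and the initial condition:

```python
import math

def s_of(x, y):
    """Invert x = s + s^2 y for the characteristic label s > 0."""
    if y == 0.0:
        return x
    return (-1.0 + math.sqrt(1.0 + 4.0 * x * y)) / (2.0 * y)

def u(x, y):
    """Candidate solution u = ln(s) + 2 s y with s = s_of(x, y)."""
    s = s_of(x, y)
    return math.log(s) + 2.0 * s * y

# Initial condition: u(x, 0) = ln(x).
assert abs(u(3.0, 0.0) - math.log(3.0)) < 1e-12

# Central-difference check that u_x * u_y = 1 off the initial line.
h, x0, y0 = 1e-6, 2.0, 0.5
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
assert abs(ux * uy - 1.0) < 1e-6
```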
\subsection*{Existence and Uniqueness for~First Order Nonlinear
PDEs}
{\label{229940}}
While it is important to understand how to solve problems, the ability
to characterize the problems can save time and provide much insight
prior to an attempt to solve them. We seek sufficient conditions to
ensure that the approaches outlined above are guaranteed to provide a
solution. Furthermore, we examine conditions under which the provided
solution is unique.~
\textbf{Existence}
It seems to me that the textbook does a bad job of explaining~\emph{why}
they make certain requirements. In particular we require that
\begin{equation*} \frac{\partial(x,y)}{\partial(s,t)} = \begin{vmatrix} x_s & x_t \\ y_s & y_t \end{vmatrix}_{t=0} \neq 0 \end{equation*}
which ensures that the relationship between $(x, y)$ and $(s,t)$ is invertible, and that we can uniquely move back and forth between these two parametrizations. Furthermore we must require that $F(x,y,u,p,q)$ is twice continuously differentiable with respect to each of its arguments in a domain $D$ of the space, and that $F^2_p + F^2_q \neq 0$ in $D$. The first requirement is needed because we take multiple derivatives; the second ensures that it is in fact a PDE (if $F_p = F_q = 0$, then $\frac{\partial x}{\partial t}=\frac{\partial y}{\partial t}=0$, the characteristic curves degenerate to points, and the equation does not actually involve the derivatives of $u$). Finally, the curve given by the initial conditions must be twice continuously differentiable.
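For the quasi-linear worked example above, where $x = se^t$ and $y = t$, the Jacobian is $x_sy_t - x_ty_s = e^t$, which never vanishes. A small sketch (evaluation points are arbitrary choices of mine):

```python
import math

def jacobian(s, t):
    """d(x,y)/d(s,t) for x = s e^t, y = t."""
    x_s, x_t = math.exp(t), s * math.exp(t)
    y_s, y_t = 0.0, 1.0
    return x_s * y_t - x_t * y_s

# The condition is imposed on the initial curve t = 0, where the
# determinant equals exp(0) = 1 for every s.
for s in (0.1, 1.0, 5.0):
    assert jacobian(s, 0.0) == 1.0
```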
\textbf{Uniqueness}
Uniqueness can be determined at multiple stages. Consider the case when
the system in equations (\ref{forpq}) is uniquely solvable
for $p_0$, $q_0$; then the solution is unique. In the case when the system
has multiple solutions, we have multiple solutions of the PDE, one for each
admissible pair $(p_0, q_0)$. If there is no solution to equations (\ref{forpq}),
then there is no solution of the initial value problem.
\section*{Fourier Series}
{\label{359412}}
Interesting for a number of reasons, Fourier series are integral to the
solution of many PDEs. We will make use of them in solving separation of
variables problems, but for now will investigate their properties. The
Fourier series is a series approximation of a
function,~\(f\left(x\right)\), using trigonometric functions. Under
certain conditions the infinite series~ will converge, and when it does
we know which values it will converge to.~ We begin with some
definitions:
\textbf{Definition: (2L periodic)~}A function is said to be~\emph{2L
periodic}~if~\(f\left(x\right)=f\left(x+2L\right)\ \forall x\) in the domain.
\textbf{Definition: (piecewise continuity)~}A function is said to
be~\emph{piecewise continuous}~on an interval {[}a, b{]} if it is
continuous except at a finite number of points in {[}a,b{]}, where it
has jump discontinuities.
\textbf{Definition: (piecewise smooth)~}A function is said to
be~\emph{piecewise smooth~}on an interval {[}a, b{]} if
both~\(f\left(x\right)\) and~\(f'\left(x\right)\) are piecewise continuous
on {[}a,b{]}.
\textbf{Definition: (uniform convergence)~}A sequence of
functions~\(\left\{g_n\left(x\right)\right\}\) is said to~\emph{converge uniformly on a
set S}~to a function~\(g\left(x\right)\)
if given~\(\epsilon>0\) there corresponds a
number~\(N\) such that for all \(x\) in \(S\), \(n>N\)
implies that~\(\left|g_n\left(x\right)-g\left(x\right)\right|<\epsilon\).
These definitions lead to an important theorem, called~\emph{Dirichlet's
Theorem:}
\textbf{Theorem: (Dirichlet's Theorem)}
Let $f$ be 2L periodic and piecewise smooth. Then the Fourier series of $f$ converges for each $x$ in $(-\infty, \infty)$ to $\frac{f(x^+) + f(x^-)}{2}$. Consequently, if $f$ is continuous at the point $x$, its Fourier series converges to $f(x)$. If, in addition, $f$ is continuous for all $x$, its Fourier series converges absolutely and uniformly to $f$ on $(-\infty, \infty)$.
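As a numerical illustration of the theorem (a sketch; the square wave is my choice of a piecewise smooth function with a jump):

```python
import math

def partial_sum(x, n_terms):
    """Fourier partial sum of the 2*pi periodic square wave (+1 on (0, pi),
    -1 on (-pi, 0)): sum of (4 / (pi n)) sin(n x) over odd n."""
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1
        total += 4.0 / (math.pi * n) * math.sin(n * x)
    return total

# At a point of continuity the series converges to f(x) = 1...
assert abs(partial_sum(math.pi / 2, 2000) - 1.0) < 1e-3
# ...while at the jump x = 0 every partial sum equals the midpoint
# (f(0^+) + f(0^-)) / 2 = 0 exactly.
assert partial_sum(0.0, 2000) == 0.0
```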
Dirichlet's theorem is important because it gives sufficient
conditions on a function for its Fourier series to be useful (i.e., convergent).
With these conditions satisfied, we know that the Fourier series converges in
a very strong sense, a fact that is useful computationally as well as
mathematically. We have discussed why we might like Fourier series, but
we have not yet defined them. We begin with finite partial sums, the only
computationally feasible way of doing it:
Let $f(x)$ be a function and $S_n(x)$ be the $n^{\text{th}}$ partial sum of its Fourier series. Then,
\begin{equation*}
S_n(x) = \frac{1}{2} a_0 + \sum_{k=1}^n \left[ a_k \cos\left(\frac{k\pi x}{L} \right) + b_k \sin\left(\frac{k\pi x}{L} \right) \right].
\end{equation*}
The Fourier series itself results from letting $n \to \infty$. Dirichlet's theorem guarantees that there exists an $N(\epsilon)$ such that for $n > N(\epsilon)$ we are within $\epsilon$ of the limiting value. This is fantastic, except that we have not yet determined how to calculate the constants $a_0, a_k, b_k$. To do this, we rely on the orthogonality of sines and cosines. In particular,
\begin{equation*}
\int_{-L}^{L} \sin \left(\frac{m\pi x}{L} \right) \sin \left(\frac{n\pi x}{L} \right) dx = \int_{-L}^{L} \cos \left(\frac{m\pi x}{L} \right) \cos \left(\frac{n\pi x}{L} \right) dx = 0 , \quad \text{for } m \neq n
\end{equation*}
and
\begin{equation*}
\int_{-L}^{L} \sin \left(\frac{m\pi x}{L} \right) \cos \left(\frac{n\pi x}{L} \right) dx= 0 , \quad \text{for all } m, n
\end{equation*}
and the important identities
\begin{equation*}
\int_{-L}^{L} \sin^2 \left(\frac{n\pi x}{L} \right) dx = \int_{-L}^{L} \cos^2 \left(\frac{n\pi x}{L} \right) dx = L \quad \text{for } n\geq 1.
\end{equation*}
These are very useful. For instance, Dirichlet's theorem suggests that we can write
\begin{equation*}
f(x) = \frac{1}{2} a_0 + \sum_{k=1}^{\infty} \left[ a_k \cos\left(\frac{k\pi x}{L} \right) + b_k \sin\left(\frac{k\pi x}{L} \right) \right];
\end{equation*}
now if we multiply both sides by $\cos\left(\frac{n\pi x}{L}\right)$ and integrate from $-L$ to $L$ over $x$ we get
\begin{equation*}
\int_{-L}^{L} f(x)\cos\left(\frac{n\pi x}{L}\right) dx= \int_{-L}^{L} \left\{\frac{1}{2} a_0 + \sum_{k=1}^{\infty} \left[ a_k \cos\left(\frac{k\pi x}{L} \right) + b_k \sin\left(\frac{k\pi x}{L} \right) \right]\right\} \cos\left(\frac{n\pi x}{L}\right) dx.
\end{equation*}
On the right-hand side the first term vanishes (for $n \geq 1$), as do all of the $\sin$ terms, by the orthogonality between $\sin$ and $\cos$. Thus, by the identity above, the RHS reduces to $a_n L$ (the $n=0$ case also works out, thanks to the factor of $\frac{1}{2}$ on $a_0$). Dividing both sides by $L$ gives us
\begin{equation*}
a_n = \frac{1}{L} \int_{-L}^{L} f(x) \cos \left(\frac{n \pi x}{L} \right) dx, \quad\text{for } n = 0, 1, 2, \ldots
\end{equation*}
Identical calculations show that
\begin{equation*}
b_n = \frac{1}{L} \int_{-L}^{L} f(x) \sin \left(\frac{n \pi x}{L} \right) dx, \quad\text{for } n = 1, 2, 3, \ldots
\end{equation*}
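These formulas are easy to check against a function whose coefficients are known in closed form: for $f(x) = x$ on $[-\pi, \pi]$ one finds $a_n = 0$ and $b_n = \frac{2(-1)^{n+1}}{n}$. A sketch using the trapezoid rule (sample counts and tolerances are my choices):

```python
import math

def fourier_coeffs(f, L, n, samples=20000):
    """Trapezoid-rule approximation of a_n and b_n on [-L, L]."""
    h = 2.0 * L / samples
    a = b = 0.0
    for i in range(samples + 1):
        x = -L + i * h
        w = 0.5 if i in (0, samples) else 1.0
        a += w * f(x) * math.cos(n * math.pi * x / L)
        b += w * f(x) * math.sin(n * math.pi * x / L)
    return a * h / L, b * h / L

for n in (1, 2, 3):
    a_n, b_n = fourier_coeffs(lambda x: x, math.pi, n)
    assert abs(a_n) < 1e-6                              # odd function: a_n = 0
    assert abs(b_n - 2.0 * (-1) ** (n + 1) / n) < 1e-5
```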
\subsection*{Least Squares and Uniform
Approximation~}
{\label{163385}}
The entirety of importance in this section, it seems to me, is found in
the~\emph{Weierstrass Approximation Theorem}. The theorem preceding it,
which states that the $n^{\text{th}}$ partial sum of a Fourier series
provides the best least-squares approximation to a square integrable
function $f(x)$ on a finite domain among all trigonometric polynomials
of degree at most $n$, seems less relevant to me: I can't imagine a
circumstance, if I were looking for a good least-squares approximation,
in which I would restrict the function class to trigonometric polynomials.
So we skip it and move to the more important theorem, and an important
result that arises as an application.
\textbf{Theorem: (Weierstrass Approximation Theorem)}
Let $\epsilon > 0$ be arbitrary and $f(x)$ be a $2\pi$ periodic continuous function. Then there is a trigonometric polynomial $T(x)$ such that $|f(x) - T(x)| < \epsilon$ for all $x$.
We omit the proof here, though it is instructive. There are many ways to prove the following result; the authors use both the Weierstrass approximation theorem and the omitted least-squares theorem to show that if $f(x)$ is a $2\pi$ periodic continuous function then the Fourier series of $f(x)$ converges to $f(x)$ in $L^2$. The proof is simple but omitted here. Finally, the text concludes the chapter with important results on uniform convergence and the exchange of limits and integrals. As this is covered extensively in analysis, and in probability, we do not cover those results here.
\par\null
\section*{Fourier Transform}
{\label{359412}}
While the Fourier series is important, it has a key limitation: it cannot handle problems posed on a domain of infinite extent. We provide (again, unfortunately, without proof) the transform itself along with an example. The \emph{Fourier transform} of $f(x)$ is defined by
\begin{equation*}
\hat{f}(\omega) = \int_{-\infty}^{\infty} e^{-i\omega x}f(x)dx
\end{equation*}
whenever the integral exists (this sign convention matches the transform used in the example below). Its inverse is given by
\begin{equation*}
f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega x}\hat{f}(\omega)d\omega
\end{equation*}
Similarly to the Fourier series, $f(x)$ is only recovered if it is continuous, piecewise smooth, and absolutely integrable. If we omit the condition of continuity then we get
\begin{equation*}
\frac{f(x^+) + f(x^-)}{2} = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega x}\hat{f}(\omega)d\omega
\end{equation*}
To establish uniform and absolute convergence, and to allow us to exchange limits, sums, derivatives and integrals, we use the \emph{Weierstrass M-test} from calculus. This is used heavily in proofs, but as the course is very applied and this is meant to be a study guide we will omit it in favor of an example. Prior to the example we note that the Fourier transform is also used in probability theory: if a distribution admits a probability density function, then the Fourier transform of that density is called the characteristic function; it determines the distribution uniquely and is therefore frequently used to show that a random variable has a particular distribution. Furthermore we note the important \emph{Plancherel theorem}, which states that if the Fourier transform of a square integrable $f$ exists then $\|f\|_{L^2}$ and $\|\hat{f}\|_{L^2}$ agree up to the normalization fixed by the transform convention; with the convention above, $\|f\|^2_{L^2} = \frac{1}{2\pi}\|\hat{f}\|^2_{L^2}$.
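A numerical sanity check of the Plancherel identity in this convention, $\int |f|^2\,dx = \frac{1}{2\pi}\int |\hat{f}|^2\,d\omega$, using a Gaussian (a sketch; the grids and truncation windows are my choices, and for an even real $f$ the transform reduces to a cosine integral):

```python
import math

def trapz(vals, h):
    """Composite trapezoid rule for equally spaced samples."""
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

f = lambda x: math.exp(-x * x / 2.0)

# Grids; the Gaussian integrands are negligible outside these windows.
hx, hw = 0.02, 0.04
xs = [-20.0 + i * hx for i in range(2001)]
ws = [-10.0 + i * hw for i in range(501)]

# f_hat(w) = integral of e^{-iwx} f(x) dx; the imaginary part vanishes
# because f is even, leaving a cosine transform.
f_hat = [trapz([math.cos(w * x) * f(x) for x in xs], hx) for w in ws]

lhs = trapz([f(x) ** 2 for x in xs], hx)                   # ||f||^2
rhs = trapz([v * v for v in f_hat], hw) / (2.0 * math.pi)  # ||f_hat||^2 / (2 pi)
assert abs(lhs - math.sqrt(math.pi)) < 1e-6  # exact: integral of e^{-x^2}
assert abs(lhs - rhs) < 1e-4
```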
\begin{enumerate}
\item
Solve the following problem by using the Fourier transform:
\begin{align*}
u_{tt} = c^2u_{xx}, \qquad & -\infty < x < \infty, t > 0 \\
u(x, 0 ) = f(x), \quad u_t(x,0)=0 , \qquad & -\infty < x < \infty
\end{align*}
where $c$ is a constant and $f(x)$ is a given initial profile.
We recall that the Fourier transform with respect to $x$ is,
\begin{equation}
\hat{u}(k, t) = \int_{-\infty}^{\infty} u(x,t)e^{-ikx} dx.
\end{equation}
In order to cement the usefulness of the Fourier transform, we derive the Fourier transform of the derivative $g'(x)$ of an arbitrary function $g(x)$ via integration by parts:
\begin{align*}
\int_{-\infty}^{\infty} g'(x)e^{-ikx} dx &= e^{-ikx}g(x)|_{-\infty}^{\infty} + ik \int_{-\infty}^{\infty} g(x) e^{-ikx} dx, \\
&= ik \int_{-\infty}^{\infty} g(x) e^{-ikx} dx, \\
&= ik \hat{g}(k). \\
\end{align*}
where we imposed the restriction on $g$ that $\lim_{x\to \pm\infty} g(x) = 0$. Applying this result twice shows that the transform of $u_{xx}$ is
\begin{equation*}
\widehat{u_{xx}}(k,t) = -k^2\hat{u}(k,t).
\end{equation*}
We perform a similar but simpler derivation for the time derivative using the basic definition of the derivative:
\begin{align*}
\int_{-\infty}^{\infty} \frac{\partial u(x,t)}{\partial t} e^{-ikx} dx &= \lim_{h\to 0} \frac{1}{h} \int_{-\infty}^{\infty} u(x,t+h) e^{-ikx} - u(x,t) e^{-ikx} dx ,\\
&= \lim_{h\to 0} \frac{1}{h} \left(\hat{u}(k,t+h) - \hat{u}(k,t) \right),\\
&= \frac{\partial \hat{u}(k,t)}{\partial t}.
\end{align*}
As before we must apply this argument twice, with the same result. Our problem has now been translated into a second order ODE in $t$; namely,
\begin{equation*}
\frac{\partial^2 \hat{u}(k,t)}{\partial t^2} = -c^2k^2 \hat{u}(k,t).
\end{equation*}
The fact that it is a second order homogeneous ODE leads us to its general solution:
\begin{equation*}
\hat{u}(k,t) = \hat{A} \sin(ckt) + \hat{B} \cos(ckt).
\end{equation*}
We use the initial condition $u_t(x,0)=0$, which transforms to $\hat{u}_t(k,0)=0$, to note that $\hat{A}$ must equal zero, and use Euler's formula to put our cosine function in a useful format:
\begin{equation*}
\hat{u}(k,t) = \hat{B}_1 e^{ickt} + \hat{B}_2e^{-ickt}, \qquad \hat{B}_1 = \hat{B}_2 = \tfrac{1}{2}\hat{B}.
\end{equation*}
Of course, we must translate our solution back to the space domain with the inverse Fourier transform. Since the constants may depend on $k$, we write $\hat{B}(k)$ for both:
\begin{align*}
u(x,t) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{u}(k,t)e^{ikx} dk \\
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} \left(\hat{B}(k) e^{ickt} + \hat{B}(k)e^{-ickt}\right)e^{ikx} dk\\
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{B}(k) e^{ik(x+ct)} +\hat{B}(k)e^{ik(x-ct)} dk \\
&= B(x+ct) + B(x-ct)
\end{align*}
Finally, we use the initial condition $u(x,0)=f(x)$, which forces $2B(x)=f(x)$, to provide the solution
\begin{equation*}
u(x,t) = \frac{f(x+ct) + f(x-ct)}{2}.
\end{equation*}
\end{enumerate}
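The final formula can be verified directly with finite differences (a sketch; the Gaussian initial profile, the constant $c$, and the step sizes are my choices):

```python
import math

c = 1.5
f = lambda x: math.exp(-x * x)   # sample smooth initial profile

def u(x, t):
    """d'Alembert-type solution u(x,t) = (f(x + ct) + f(x - ct)) / 2."""
    return 0.5 * (f(x + c * t) + f(x - c * t))

# Initial conditions: u(x,0) = f(x) and u_t(x,0) = 0.
assert abs(u(0.7, 0.0) - f(0.7)) < 1e-12
d = 1e-5
assert abs((u(0.7, d) - u(0.7, -d)) / (2 * d)) < 1e-6

# PDE u_tt = c^2 u_xx, checked with second central differences.
x0, t0, d = 0.3, 0.4, 1e-4
utt = (u(x0, t0 + d) - 2 * u(x0, t0) + u(x0, t0 - d)) / d**2
uxx = (u(x0 + d, t0) - 2 * u(x0, t0) + u(x0 - d, t0)) / d**2
assert abs(utt - c * c * uxx) < 1e-4
```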
\section*{Separation of Variables}
{\label{359412}}
At last we are able to provide a use for Fourier series via the
ubiquitous separation of variables method for solving PDEs. The idea
behind the method is very simple. Using our notation
where~\(u\) is the solution to a PDE, we try to write it
as a product (or sum of products) of functions of one variable each (the separation of
variables step); we then use these simpler functions to build a solution
to~\(u\). It is most easily understood with examples.
\subsection*{Homogeneous Initial, Boundary Value
Problems}
{\label{657394}}\par\null
Solve by separation of variables,
\begin{align*}
u_{tt} &= c^2 u_{xx}, \qquad &x\in (0,L), t \geq 0 \\
u(x,0) &= f(x), \quad u_t(x,0) = g(x), \qquad &x \in (0,L) \\
u(0,t) &= 0, \quad u_x(L,t) = 0, \qquad &t \geq 0.
\end{align*}
We begin by expressing $u(x,t)$ as a product of two functions, $X(x), T(t)$. We now have the form,
\begin{equation*}
X(x)T''(t) = c^2 T(t)X''(x),
\end{equation*}
which can be turned into two ODEs by setting the functions of each variable to its own side, and setting both equal to some constant, $K$:
\begin{equation*}
\frac{X''}{X} = \frac{T''}{c^2T} = K.
\end{equation*}
We solve first the ODE for X, which is of the form:
\begin{equation*}
X'' + KX = 0.
\end{equation*}
Of course, there is a trivial solution $X\equiv0$. We recognize that a nontrivial solution satisfying the boundary conditions is possible only if $K<0$. Thus, we define $\lambda > 0$ and let $K= -\lambda^2$, resulting in
\begin{equation}\label{xpde}
X'' + \lambda^2 X = 0,
\end{equation}
with boundary conditions,
\begin{equation*}
X(0) = 0, \quad X'(L) = 0.
\end{equation*}
The general solution of equation (\ref{xpde}) is
\begin{equation*}
X(x) = A \cos(\lambda x) + B \sin(\lambda x),
\end{equation*}
and applying the first boundary condition we get that $A = 0$, so that
\begin{equation*}
X(x) = B \sin(\lambda x).
\end{equation*}
The second boundary condition stipulates that $\lambda B \cos(\lambda L) = 0$, so that $\lambda L = \pi (n-\frac{1}{2})$ for $n = 1, 2, \ldots$; that is, $\lambda_n = \frac{\pi (n-1/2)}{L}$. So for a constant $B\neq 0$,
\begin{equation}
X = B_n \sin(\lambda_n x),\qquad \lambda_n = \frac{\pi (n-\frac{1}{2}) } {L}, \quad n = 1, 2, \ldots.
\end{equation}
We continue, solving the ODE for T. This ODE is of the form
\begin{equation*}
T'' + \lambda_n^2 c^2 T = 0,
\end{equation*}
with general solution
\begin{equation*}
T = C \cos (\lambda_n c t) + D \sin(\lambda _ n c t)
\end{equation*}
and initial conditions
\begin{equation*}
u(x,0) = f(x), \quad u_t(x,0) = g(x).
\end{equation*}
Before applying the initial conditions we un-separate the variables. Recognizing that the constants are arbitrary, we combine them; this results in
\begin{equation}\label{sum} u_n(x,t) =\left[ C_n \cos (\lambda_n c t) + D_n \sin(\lambda _ n c t)\right] \sin(\lambda_n x)
\end{equation}
Using the initial conditions, and recognizing that the wave equation and boundary conditions are both linear so that the $u_n$ superpose, we have
\begin{align*}
u(x,0) =f(x)&= \sum_{n=1}^{\infty} C_n \sin\left(\lambda_n x\right) \\
u_t(x,0) =g(x)&= \sum_{n=1}^{\infty} \lambda_n D_n c \sin\left(\lambda_n x\right)
\end{align*}
These are the Fourier sine series expansions of $f(x)$ and $g(x)$ in the eigenfunctions $\sin(\lambda_n x)$. We can thus choose the coefficients for $f(x)$ and $g(x)$ to be:
\begin{equation}\label{cn}
C_n = \frac{2}{L}\int_0^L f(x) \sin\left(\lambda_nx\right) dx
\end{equation}
and,
\begin{equation}\label{dn}
D_n = \frac{2}{\lambda_ncL}\int_0^L g(x) \sin\left(\lambda_n x\right) dx.
\end{equation}
Our solution is
\begin{equation*}
u(x,t) = \sum_{n=1}^{\infty}u_n(x,t)
\end{equation*}
where $u_n$ is given by equation (\ref{sum}), and the coefficients are given by equations (\ref{cn}) and (\ref{dn}).
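To see the coefficient formula in action: if $f$ is exactly the first eigenfunction, orthogonality forces $C_1 = 1$ and all other $C_n = 0$. A numerical sketch (the value of $L$ and the sample $f$ are my choices):

```python
import math

L = 2.0
lam = lambda n: math.pi * (n - 0.5) / L   # eigenvalues from the boundary conditions

def C(n, f, samples=20000):
    """C_n = (2/L) * integral_0^L f(x) sin(lam(n) x) dx via the trapezoid rule."""
    h = L / samples
    total = 0.0
    for i in range(samples + 1):
        x = i * h
        w = 0.5 if i in (0, samples) else 1.0
        total += w * f(x) * math.sin(lam(n) * x)
    return 2.0 / L * total * h

f = lambda x: math.sin(lam(1) * x)   # first mode as the initial displacement
assert abs(C(1, f) - 1.0) < 1e-6
assert abs(C(2, f)) < 1e-6
assert abs(C(3, f)) < 1e-6
```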
\par\null
\subsection*{Inhomogeneous, Damped Initial Boundary Value
Problems}
{\label{657394}}
Technically this ought to be dealt with in two separate subsections;
instead we address the damped case independently of the inhomogeneous case as we
work through it, and string the two together once both are well
understood. We begin with an example of the damped case:
\begin{enumerate}
\item Consider the equation
\begin{equation} \label{damped}
u_{tt} + ku_t = c^2 u_{xx}.
\end{equation}
with initial and boundary conditions
\begin{align*}
u(x,0)=f(x), \qquad u_t(x,0)= g(x), \qquad 0\leq x\leq L \\
u(0,t) = u(L,t) = 0, \qquad t\geq0.
\end{align*}
where $k$ is known as the \emph{damping coefficient}. As before, we use separation of variables and rewrite equation (\ref{damped}) as
\begin{equation*}
X(x)T''(t) + kX(x)T'(t) = c^2 X''(x)T(t).
\end{equation*}
which amounts to
\begin{equation*}
\frac{X''}{X} = \frac{T'' + kT'}{c^2T} = s
\end{equation*}
for some constant $s$. As in the undamped case we can solve for $X$ first, and use the boundary conditions to get
\begin{equation*}
X(x) = B \sin(\lambda_n x), \qquad \lambda_n = \frac{n\pi}{L}.
\end{equation*}
We then solve for T by rewriting it as
\begin{equation*}
T'' +kT' + (\lambda c)^2T = 0,
\end{equation*}
which is a constant-coefficient linear ODE. Its characteristic polynomial is
\begin{equation*}
r^2 + kr + \lambda_n^2c^2 = 0
\end{equation*}
leading to roots $\frac{-k + \sqrt{k^2 - 4\lambda_n^2c^2}}{2}$ and $\frac{-k - \sqrt{k^2 - 4\lambda_n^2c^2}}{2}$, so that if we assume $k^2 - 4\lambda_n^2c^2<0$ (the underdamped case) and use Euler's identity we get \begin{equation*}
T_n(t) = e^{-kt/2}\left[a_n \cos\left(\tfrac{t}{2}\sqrt{4\lambda_n^2 c^2 - k^2}\right) + b_n \sin\left(\tfrac{t}{2}\sqrt{4\lambda_n^2 c^2 - k^2}\right) \right].
\end{equation*}
We combine this with the solution for X to get
\begin{equation*}
u(x,t) = e^{-kt/2}\sum_{n=1}^{\infty} \left[a_n \cos\left(\tfrac{t}{2}\sqrt{4\lambda_n^2 c^2 - k^2}\right) + b_n \sin\left(\tfrac{t}{2}\sqrt{4\lambda_n^2 c^2 - k^2}\right) \right] \sin(\lambda_n x)
\end{equation*}
Finally, we use the initial conditions to find $a_n$ and $b_n$ to be
\begin{equation*}
a_n = \frac{2}{L}\int_0^L f(x) \sin(\lambda_n x)\, dx, \qquad b_n = \frac{2}{\sqrt{4\lambda_n^2 c^2 - k^2}}\left(\frac{2}{L}\int_0^L g(x) \sin(\lambda_n x)\, dx +\frac{k}{2}a_n\right),
\end{equation*}
\end{enumerate}
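A finite-difference check that the underdamped mode shape satisfies $T'' + kT' + (\lambda_n c)^2T = 0$ (a sketch; the parameter values are mine and assume $k^2 < 4\lambda_n^2c^2$):

```python
import math

k, c, lam = 0.4, 1.0, math.pi / 2.0   # sample parameters with k^2 < 4 lam^2 c^2
omega = math.sqrt(4.0 * lam**2 * c**2 - k**2) / 2.0

def T(t, a=0.7, b=-0.2):
    """Underdamped mode: e^{-kt/2} (a cos(omega t) + b sin(omega t))."""
    return math.exp(-k * t / 2.0) * (a * math.cos(omega * t) + b * math.sin(omega * t))

# Central differences of T at a sample point.
t0, h = 0.8, 1e-4
Tpp = (T(t0 + h) - 2.0 * T(t0) + T(t0 - h)) / h**2
Tp = (T(t0 + h) - T(t0 - h)) / (2.0 * h)
assert abs(Tpp + k * Tp + (lam * c) ** 2 * T(t0)) < 1e-5
```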
The inhomogeneous case is hardly different, as we make assumptions to make our lives easier. In physical terms, the inhomogeneous case has a force driving the wave. We use an example very similar to the previous one.
\begin{enumerate}
\item Consider the equation
\begin{equation} \label{forced}
u_{tt} + ku_t = c^2 u_{xx} + F(x,t).
\end{equation}
with initial and boundary conditions
\begin{align*}
u(x,0)=f(x) \qquad u_t(x,0)&= g(x), \qquad 0\leq x\leq L \\
u(0,t)=0 &= u(L,t)=0, \qquad t\geq0.
\end{align*}
\end{enumerate}
Once again we split the $X$ and $T$ components. $X$ has the same solution, and if we assume that $F(x,t)$ has a series expansion in the same eigenfunctions we get
\begin{equation*}
F(x,t) = \sum_{n=1}^{\infty} F_n(t) \sin(\lambda_n x), \qquad 0\leq x\leq L
\end{equation*}
This of course means that we can use orthogonality arguments to show that
\begin{equation*}
F_n(t) = \frac{2}{L} \int_0^L F(x,t)\sin(\lambda_n x)\, dx.
\end{equation*}
Solving for $T$, we expand $u(x,t) = \sum_{n=1}^{\infty} u_n(t)\sin(\lambda_n x)$ as above and substitute into the PDE; this time we have $F_n(t)$ to contend with, giving
\begin{equation*}
\sum_{n=1}^{\infty} \left[u_n''(t) + ku_n'(t) + c^2\lambda_n^2u_n(t) - F_n(t)\right]\sin(\lambda_n x) = 0
\end{equation*}
This holds precisely when, for each $n$,
\begin{equation*}
u_n''(t) + ku_n'(t) + c^2\lambda_n^2u_n(t) = F_n(t).
\end{equation*}
We need only apply the initial conditions to determine the remaining constants, and the solution is complete.
\selectlanguage{english}
\FloatBarrier
\end{document}