Cplot
In this homework, we will apply the Fast Fourier Transform to solve a wave-propagation problem using spectral methods.

FFTW3

In order to use the Fast Fourier Transform, the C subroutine library FFTW3 is employed. The installation steps are well documented for UNIX systems, but for Windows and its many different compilers the steps vary. We use a powerful development tool, QT Creator (https://www.qt.io/ide/), as a pure C++ IDE. Because it is less commonly used for numerical computation than MSVC, the FFTW3 installation steps for it are not clearly documented. First of all, we downloaded the pre-compiled FFTW3 subroutine library, which ships with _.dll_ and _.def_ files. Secondly, we had to use the built-in QT Command Prompt (MinGW for Desktop) to convert the _.dll_ files into libraries usable by QT, _.lib_ or _.a_:

lib /def:libfftw3-3.def
lib /def:libfftw3f-3.def
lib /def:libfftw3l-3.def

Three _.lib_ files are created, and we can save all three as new _.a_ files for QT. Next, besides including _fftw3.h_, we need to link QT to the libraries by editing the project file, _.pro_:

LIBS += "C:/Users/yc225/Desktop/Code/fftw/libfftw3-3.a"
LIBS += "C:/Users/yc225/Desktop/Code/fftw/libfftw3f-3.a"
LIBS += "C:/Users/yc225/Desktop/Code/fftw/libfftw3l-3.a"

Finally, we need to copy the _.dll_ and _.exp_ files to the WINDOWS/SYSTEM32 folder. Only the 32-bit version of FFTW3 works in this step even though my OS is 64-bit Windows; we encountered the error message _0xc000007b_ when we tried to install the 64-bit FFTW3.

SPECTRAL METHODS

The wave equation we would like to solve is

$$v^2\left[ \frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2} \right]\psi = \frac{\partial^2 \psi}{\partial t^2}$$

In order to use the FFT, we write ψ as a linear combination of the different waves in the spectrum, i.e. in Fourier series form,

$$\psi(\mathbf{r},t) = \sum_{ij} a_{ij}(t)\exp(i\mathbf{k}_{ij} \cdot \mathbf{r})$$

Plugging this into the partial differential equation, we can solve for $a_{ij}(t)$:

$$a_{ij}(t) = b_{ij}\exp(iv|\mathbf{k}_{ij}|t) + c_{ij}\exp(-iv|\mathbf{k}_{ij}|t)$$

Once we have the complex amplitudes $a_{ij}(t)$ at a given t, we can do an inverse FFT to transform the $a_{ij}$ back to $\psi(\mathbf{r},t)$. We can calculate $b_{ij}$ and $c_{ij}$ by assuming that at t = 0 the wave $\psi(\mathbf{r}, t=0)$ is at rest:

$$\frac{\partial \psi}{\partial t}\bigg|_{t=0} = 0 = (b_{ij}-c_{ij})(iv|\mathbf{k}_{ij}|)$$

Essentially, $b_{ij} = c_{ij}$, and $a_{ij}(t)$ becomes

$$a_{ij}(t) = 2b_{ij}\cos(v|\mathbf{k}_{ij}|t)$$

At t = 0, we can see that $2b_{ij}$ is nothing but the Fourier transform of $\psi(\mathbf{r},t=0)$. Therefore, the plan to tackle this wave equation is to find $2b_{ij}$, use the cosine expression above to calculate $a_{ij}(t)$ at different times, and then inverse transform back to ψ (a code sketch of this update appears at the end of this section).

RESULTS

The initial displacement of the 2D wave $\psi(\mathbf{r},t=0)$ is the combination of three Gaussians,

$$\psi(\mathbf{r},t=0) = \sum_{i=1}^{3} A_i \exp\left[ -\frac{|\mathbf{r}-\mathbf{r}_i|^2}{s_i^2} \right]$$

where $x_i$ = (0.4, 0.5, 0.6)$L_x$, $y_i$ = (0.4, 0.6, 0.4)$L_y$ and $s_i$ = (10, 20, 10) in meters, with $L_x$ = $L_y$ = 500 m. The initial displacement is shown on the upper left of Fig. 1. Then we would like to watch the wave propagate as time passes. Shown in Fig. 1 are three wave displacements corresponding to three time points, t = 0.1 s, t = 0.2 s and t = 0.4 s. The wavefronts propagate outwards from the three Gaussian centers. From t = 0 s to t = 0.2 s the three wavefronts interact with each other and the global contour is not clean, but we can still distinguish which is which. At t = 0.2 s, we can see that the displacement of the bigger Gaussian goes negative, implying the oscillating nature of the wave. By t = 0.4 s, the wavefronts are clear and propagating outwards.
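To make the plan above concrete, the following is a minimal sketch of the spectral update with the FFTW3 calls discussed earlier. The grid size N, the row-major layout and the function name are illustrative assumptions, not the actual homework code; FFTW's transforms are unnormalized, so the inverse transform must be divided by N².

```cpp
#include <fftw3.h>
#include <cmath>
#include <vector>

// Sketch of the spectral update: FFT the displacement to get a_ij(0) = 2b_ij,
// multiply each mode by cos(v|k_ij|t), then inverse FFT back to real space.
// psi holds the initial condition on entry and psi(r, t) on exit.
void propagate(std::vector<double>& psi, int N, double L, double v, double t)
{
    fftw_complex* in  = fftw_alloc_complex(N * N);
    fftw_complex* out = fftw_alloc_complex(N * N);
    for (int n = 0; n < N * N; ++n) { in[n][0] = psi[n]; in[n][1] = 0.0; }

    fftw_plan fwd = fftw_plan_dft_2d(N, N, in, out, FFTW_FORWARD,  FFTW_ESTIMATE);
    fftw_plan bwd = fftw_plan_dft_2d(N, N, out, in, FFTW_BACKWARD, FFTW_ESTIMATE);
    fftw_execute(fwd);                        // out now holds 2b_ij

    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) {
            // map FFT indices to signed wave numbers
            int ki = (i <= N / 2) ? i : i - N;
            int kj = (j <= N / 2) ? j : j - N;
            double k = 2.0 * M_PI / L * std::sqrt(double(ki * ki + kj * kj));
            double c = std::cos(v * k * t);   // a_ij(t) = 2b_ij cos(v|k|t)
            out[i * N + j][0] *= c;
            out[i * N + j][1] *= c;
        }
    }

    fftw_execute(bwd);                        // inverse transform to real space
    for (int n = 0; n < N * N; ++n)
        psi[n] = in[n][0] / (N * N);          // undo FFTW's N^2 scaling

    fftw_destroy_plan(fwd); fftw_destroy_plan(bwd);
    fftw_free(in); fftw_free(out);
}
```

Calling propagate() on a fresh copy of the initial condition for each of t = 0.1 s, 0.2 s and 0.4 s would correspond to the frames shown in Fig. 1.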
Random1
The purpose of this homework is to explore the Monte Carlo algorithm and apply it to a simplified protein folding model in 2D.

MONTE CARLO METHOD

The Monte Carlo method randomly generates possible solutions to a problem in a solution space and tests their degree of goodness against certain physical requirements. There are usually two ways of generating the possible solutions. First, we can generate them completely at random; for example, we use a random number generator to do Monte Carlo integration. Second, we can generate each possible solution from the previous step by randomly changing some parameters of the previous one. We will use the latter to generate our 2D protein structures in this homework. The general flow of the Monte Carlo method is as follows. Note that we use the term "conformation space" instead of "solution space" since we are talking about protein structures here.

1. Start from an initial state in the conformation space.
2. Randomly change the previous state, subject to Requirement 1.
3. Determine the degree of goodness by Criterion 2.
4. Accept or reject this state by physical Rule 3.
5. If it is accepted, pass this state on and repeat 2 through 5 for a certain number of steps.
6. If it is rejected, do not pass the state on and repeat 2 through 4 until the new state is accepted.

Requirement 1, Criterion 2 and Rule 3 are problem-specific, and we will spell them out for our protein folding problem.

2D PROTEIN FOLDING

Proteins are composed of 20 different amino acids (AAs) in a polypeptide chain, and due to the mutual interactions between those AAs, proteins favor certain folded states that lower the Gibbs free energy. The interactions are mostly negative (attractive) because of hydrophobic effects or ion-ion interactions. In order for proteins to perform certain biological functions, their unique structures are essential. We can use a simple bead-and-chain model for a 2D protein chain, assuming that all the AAs are of the same size and that the peptide bond between two AAs is rigid, exactly one unit long and unstretchable. Each AA occupies one grid point of the 2D space and cannot occupy the same point as any other AA. When the protein folds, the non-covalent interactions apply to any two non-bonded AAs separated by one unit. We can then calculate the relative Gibbs free energy ΔG by summing all the interactions of non-bonded neighbors,

$$\Delta G = E_0 = \sum_{(i,j)} E_{t(i)t(j)}$$

where (i, j) are the indices of two neighboring AAs of types (t(i), t(j)) and E is a 20 × 20 interaction matrix. With this model in mind, we can determine the requirements mentioned in the previous section.

1. Requirement 1:
   1. The modified AA cannot occupy another's position.
   2. The modified AA must be one unit away from its neighbor(s).
   3. The way we modify an AA's position is to move it by (1, 0), (1, 1), (0, 1), (−1, 1), (−1, 0), (−1, −1), (0, −1) or (1, −1): eight possible changes.
2. Criterion 2: Evaluate the interaction energy E₀ and use this number to determine the goodness of the state. The lower, the better.
3. Rule 3:
   1. If the new state has lower E₀, the protein adopts it in order to approach the minimum of the folding landscape.
   2. If the new state has higher E₀, the protein does not favor it. However, there is still some probability of jumping from a lower energy state to a higher one, $P = e^{-(E_{new} - E_0)/kT}$.

Once the model is set and the steps are clear, we can start the simulation.

PROGRAM CODES

First, we need a general random number generator, _myrand(seed)_.
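The write-up does not reproduce _myrand()_ itself, so the following is only a sketch of a seeded generator of the kind described; the linear congruential constants below are the common Numerical Recipes "quick and dirty" pair, an assumption rather than the homework's actual choice.

```cpp
// Sketch of a seeded uniform random number generator in the spirit of
// myrand(seed): one 32-bit linear congruential step per call, mapped
// to a double in [0, 1).
double myrand(unsigned long& seed)
{
    seed = (1664525UL * seed + 1013904223UL) & 0xffffffffUL; // LCG step
    return seed / 4294967296.0;                              // scale to [0, 1)
}
```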
We will test its validity and then apply it to alter the position of a randomly selected AA. Given a different _seed_, the function gives a different random number sequence. Our seeds for generating the interaction matrix E and the AA sequence of the protein are two distinct but fixed values, so we are guaranteed to use exactly the same protein and interactions throughout the calculation. Apart from those, the seed is set by _time(NULL)_ and is independent of our bias. Second, there are several subroutines that do the Monte Carlo calculations and make sure the protein satisfies the requirements. Note that the information of the protein is stored in a 45 × 3 matrix whose first column holds the AA types, the second the x positions and the third the y positions.

1. _neighbor()_: inputs a Protein Vector and outputs the pairs of indices of two non-bonded AAs.
2. _Energy()_: inputs the pairs of neighbor indices, the Protein Vector and the interaction matrix E, and outputs the energy E₀.
3. _n2ndistance()_: inputs a Protein Vector and outputs the end-to-end distance of the protein.
4. _pcheck()_: inputs a Protein Vector and the index of a certain AA and checks whether that AA occupies another's position. The function outputs _true_ if the protein is not allowed, _false_ otherwise.
5. _conformationchange()_: inputs a Protein Vector, makes a position change to one of its AAs and outputs the modified new Protein Vector.

RESULTS

First we tested the random number generator _myrand()_. It gives a uniform distribution of numbers between 0 and 1, and the points $(x_{n+1}, x_n)$ cover the 1 × 1 square without noticeable patterns, as shown in Fig. 1. We further tested it by estimating π,

$$\frac{\pi}{4} = \frac{N_{in}}{N}$$

where $N_{in}$ is the number of points inside the quarter circle and N is the total number of points. As the total number of points increases, the RHS approaches $\frac{\pi}{4}$ asymptotically, as shown in Fig. 2.
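As an illustration of the π test above, a minimal driver using the _myrand()_ sketch might look like the following; the point count is arbitrary.

```cpp
#include <cstdio>
#include <ctime>

double myrand(unsigned long& seed);   // the generator sketched above

int main()
{
    unsigned long seed = (unsigned long)time(NULL);
    const long N = 1000000;               // total number of points (illustrative)
    long Nin = 0;
    for (long n = 0; n < N; ++n) {
        double x = myrand(seed);
        double y = myrand(seed);
        if (x * x + y * y < 1.0) ++Nin;   // point falls inside the quarter circle
    }
    std::printf("pi estimate = %f\n", 4.0 * (double)Nin / (double)N);
    return 0;
}
```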
Final project outline for AEP 4830.

Motivation

RNAs are single-stranded ribonucleic acids with four bases that form the pairs A-U and G-C. RNAs serve many crucial biological functions, and their functionality is mainly determined by their conformations and sequences. Both the sequences and the native folding conformations have evolved to fit certain shapes of proteins. I would like to explore more in this area and also in Monte Carlo methods as well as other algorithms.

Modeling

The ssRNA can be modeled as beads on a flexible but negatively charged strand. The flexibility of its phosphate backbone affects the base interactions and in turn its conformations, while the negative charges prevent two parts of the strand from coming too close. Unlike proteins, whose peptide bonds are non-polar, ssRNA has weaker interactions within itself, mainly the hydrogen bonds between the bases C-G and A-U. Therefore, RNA tends to form a variety of shapes such as bulges, hairpins, junctions and duplexes. I will use a weak step function to model the interaction energies between the bases A-U and C-G, and a decaying repulsion between backbone segments in 2D to model its secondary structure. Then the Monte Carlo method and the Genetic Algorithm will be applied to the system. The Monte Carlo method is based on a random search of structures and tries to find the best possible structures meeting certain requirements. The Genetic Algorithm also involves some amount of random search, but once it identifies better fitness it passes the structural features on to the next generations and begins the next random search. Favorable structures are amplified through generations by mating, and bad structural features are eliminated. Finally, it requires some variation between generations, "mutations", which can optimize the search process and make it converge more quickly if well used.

Goal

The goal is to explore the application of both the Monte Carlo method and the Genetic Algorithm in this field. I expect to see the 2D RNA secondary structures change for different sequences and environments (by changing the parameters of the repulsion and base-pairing interactions). Finally, we can let the RNA interact with molecules of different shapes and binding sites to see how the ssRNA adjusts itself to different targets. This might give us some biological insight into RNA dynamics.
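As a concrete starting point for the modeling section above, the base-pairing term could look like the hypothetical sketch below: a weak step function that is attractive only for complementary bases within one lattice unit. The well depth eps and the cutoff are free parameters of the proposal, not fixed values.

```cpp
#include <cmath>

// Hypothetical step-function pairing energy for the proposed ssRNA model:
// complementary bases (A-U, C-G) within one lattice unit feel a weak
// attraction -eps; every other pair contributes zero.
double pairEnergy(char b1, char b2, double dx, double dy, double eps = 1.0)
{
    bool complementary = (b1 == 'A' && b2 == 'U') || (b1 == 'U' && b2 == 'A')
                      || (b1 == 'C' && b2 == 'G') || (b1 == 'G' && b2 == 'C');
    double r = std::sqrt(dx * dx + dy * dy);   // separation on the 2D lattice
    return (complementary && r <= 1.0) ? -eps : 0.0;
}
```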
Fit1
In this homework, we apply linear least-squares curve fitting to the monthly carbon dioxide concentration at Mauna Loa, Hawaii. We compare two different fitting models, and our homemade C++ code against the MATLAB _fit_ package.

χ² CURVE FITTING

The carbon dioxide raw data, with month $t_i$, carbon dioxide concentration $y_i$ and measurement uncertainty $\sigma_i$, can be expressed as $(t_i, y_i, \sigma_i)$, i = 1, 2, ...N. The fitting curve can be written as a linear combination of, say, m independent functions,

$$f(t) = \sum_{k=1}^{m} a_k f_k(t)$$

where the m coefficients $a_k$ are to be determined. Finding the best fit can be converted into the problem of minimizing the residues between the data and the curve. This is also known as the reduced chi-squared fit, aiming to minimize χ²:

$$\chi^2 = \frac{1}{N-m}\sum_{i=1}^{N} \left[\frac{y_i - f(t_i)}{\sigma_i} \right]^2$$

A trick to reduce this problem is to differentiate χ² with respect to $a_k$; we then end up with a matrix equation,

$$\sum_{k=1}^{m} F_{lk}a_k = b_l$$

$$F_{lk} = \sum_{i=1}^{N} \frac{f_l(t_i)f_k(t_i)}{\sigma_i^2} \; ; \quad b_l = \sum_{i=1}^{N} \frac{y_i f_l(t_i)}{\sigma_i^2}$$

Once we have the m × m matrix F and the m-vector b, we can use Gauss-Jordan elimination to find the coefficients a and the inverse matrix $F^{-1}$. The squared uncertainties in the coefficients are the diagonal elements of $F^{-1}$.

THE C++ CODES

Instead of using the subroutine _arrayt.h_ to generate and manipulate vectors and matrices, we took advantage of the C++ template _vector<type>_. The template not only takes care of 1D vectors; it also generalizes to 2D matrices or multidimensional tensors. With _vector<vector<type>>_ we can directly assign a 2D matrix to F. The Gauss-Jordan elimination routine, _gaussj()_, is written with the vector template. In addition, we store the m linearly independent functions in a vector, _functionset1(t)_ and _functionset2(t)_, so we can call a certain function by its index (a sketch of the normal-equation assembly appears at the end of this section).

RESULT

First of all, we fit the carbon dioxide data with the following curve:

$$f(t) = a_1\sin(\omega t)+a_2\sin(2\omega t)+a_3\cos(\omega t)+a_4\cos(2\omega t)+a_5+a_6t+a_7t^2$$

The sin() and cos() terms account for the seasonal variations and the polynomials take care of the general increase. We include the first harmonics sin(2ωt) and cos(2ωt) for finer monthly changes. The ω should correspond to a period of 12 months,

$$\omega = \frac{2\pi}{12}$$

The first fit is shown in Fig. 1, with χ² = 1.09793. We can also plot the residues $y_i - f(t_i)$ to see how the real data deviate from the model, as shown in Fig. 2. The fitting coefficients are tabulated as follows.

  a₁   2.7694 ± 0.0016
  a₂   −0.67753 ± 0.0016
  a₃   −0.374104 ± 0.0016
  a₄   0.384232 ± 0.0016
  a₅   314.14 ± 0.0064
  a₆   0.0669137 ± 3.9054 × 10⁻⁷
  a₇   8.40331 × 10⁻⁵ ± 1.0235 × 10⁻¹²
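For reference, the assembly of the normal equations with the vector template might be sketched as follows. The signatures of _functionset1()_ and _gaussj()_ are assumptions about the homemade code, shown only to illustrate the F and b sums above.

```cpp
#include <vector>
using std::vector;

vector<double> functionset1(double t);                 // m basis values at t (assumed)
void gaussj(vector<vector<double>>& F, vector<double>& b);  // in-place solver (assumed)

// Build F_lk = sum f_l f_k / sigma^2 and b_l = sum y f_l / sigma^2
// from the data (t_i, y_i, sigma_i) and the m basis functions.
void buildNormalEquations(const vector<double>& t, const vector<double>& y,
                          const vector<double>& sigma, int m,
                          vector<vector<double>>& F, vector<double>& b)
{
    F.assign(m, vector<double>(m, 0.0));
    b.assign(m, 0.0);
    for (size_t i = 0; i < t.size(); ++i) {
        vector<double> f = functionset1(t[i]);
        double w = 1.0 / (sigma[i] * sigma[i]);        // 1/sigma_i^2 weight
        for (int l = 0; l < m; ++l) {
            b[l] += w * y[i] * f[l];                   // accumulate b_l
            for (int k = 0; k < m; ++k)
                F[l][k] += w * f[l] * f[k];            // accumulate F_lk
        }
    }
}
```

A subsequent call to gaussj(F, b) would then, under the convention described above, leave the coefficients a in b and $F^{-1}$ in F, whose diagonal gives the squared uncertainties.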
A2
FINITE DIFFERENCE ALGORITHM

In order to solve the time-dependent Schrodinger equation, we apply the finite difference algorithm to the time and space coordinates. First, we divide the 1D space coordinate into N discrete points at a certain time $t_n$, giving $\psi(x_j, t_n)$, where j = 1, 2...N. For the next time step $t_{n+1}$, we can write the equation in terms of the function evaluated at the previous time $t_n$,

$$a_j\psi(x_{j-1},t_{n+1})+b_j\psi(x_{j},t_{n+1})+c_j\psi(x_{j+1},t_{n+1}) = d_j$$

where j = 1, 2...N. This is essentially a matrix equation Aψ = d, where A is a tridiagonal matrix. We can solve for the 1D vector ψ by converting A to an upper triangular matrix and solving for ψ from the Nth element back to the first (a code sketch appears at the end of this section):

1. Step 1.
   1. $c_0 \leftarrow c_0/b_0$
   2. $d_0 \leftarrow d_0/b_0$
2. Step 2.
   1. $c_i \leftarrow c_i/(b_i - a_ic_{i-1})$, i = 1, 2, 3...N − 2
   2. $d_i \leftarrow (d_i - a_id_{i-1})/(b_i - a_ic_{i-1})$, i = 1, 2, 3...N − 1
3. Step 3.
   1. $\psi_{N-1} \leftarrow d_{N-1}$
   2. $\psi_i \leftarrow d_i - c_i\psi_{i+1}$, i = N − 2, N − 3...0

We iterate the algorithm for each time point to obtain ψ(x, t_f) at some final time t_f.

TIME DEPENDENT SCHRODINGER EQUATION

The time-dependent Schrodinger equation for a wave function ψ in a potential V can be written as

$$i\hbar\frac{\partial \psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \psi(x,t)}{\partial x^2}+V(x,t)\psi(x,t)$$

To solve this equation numerically, we are given the wave function at t = 0,

$$\psi(x,t=0) = \exp\left[ -\left( \frac{x-x_0}{s} \right)^2 + ik_0x \right]$$

where $x_0$ is the center of the wave packet, L = 1000 Å, s = 20 Å and k₀ = 1 Å⁻¹. The initial wave function is shown in Fig. 1, with the real part, imaginary part and |ψ|² in different colors. The potential function is also given; it looks like a smoothed step function (Fig. 2),

$$V(x) = \frac{V_0}{1+\exp[(0.5L-x)/w]}$$

Here, V₀ = 4.05 eV and w = 7 Å.
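The tridiagonal solve in Steps 1 through 3 of the first section might be sketched as follows, with complex coefficients for the Schrodinger problem. The container choice and the in-place modification of c and d are illustrative.

```cpp
#include <vector>
#include <complex>
using cvec = std::vector<std::complex<double>>;

// Solve A psi = d for tridiagonal A with sub/main/super diagonals a, b, c
// (a[0] is unused). c and d are overwritten by the forward sweep.
void tridiag(const cvec& a, const cvec& b, cvec& c, cvec& d, cvec& psi)
{
    int N = (int)d.size();
    c[0] /= b[0];                                    // Step 1
    d[0] /= b[0];
    for (int i = 1; i < N; ++i) {                    // Step 2: forward sweep
        std::complex<double> m = b[i] - a[i] * c[i - 1];
        if (i < N - 1) c[i] /= m;                    // c_i for i = 1..N-2
        d[i] = (d[i] - a[i] * d[i - 1]) / m;         // d_i for i = 1..N-1
    }
    psi.resize(N);
    psi[N - 1] = d[N - 1];                           // Step 3: back substitution
    for (int i = N - 2; i >= 0; --i)
        psi[i] = d[i] - c[i] * psi[i + 1];
}
```

Calling tridiag() once per time step, with a, b, c, d rebuilt from ψ(x, tₙ), advances the wave function to ψ(x, tₙ₊₁).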
B1
FINITE DIFFERENCE ALGORITHM

We have used the Runge-Kutta method to solve ODEs with initial conditions; however, many physical systems cannot be described by ODEs. Partial differential equations with boundary conditions are inevitable for physical systems such as the time-independent Schrodinger equation and electromagnetic fields. Those equations involve partial derivatives in the spatial coordinates with specified boundaries and/or sources. When solving such a system numerically, the finite difference algorithm is a straightforward and easy-to-program method. We will use the 2D Laplace equation as an example to demonstrate the algorithm,

$$\nabla^2\phi(x,y) = 0$$

The finite difference algorithm requires mesh-gridding the space of interest; the value on each grid point (i, j) represents $\phi(x = ih_1, y = jh_2) = \phi_{ij}$, where h₁ and h₂ are the spacings along the x and y axes. It is also necessary to set up a "flag" grid which records whether a given (i, j) is on a boundary or source (electrode). Next, from the finite differences between $\phi_{ij}$ and its adjacent values, the second partial derivatives can be approximated, and we can solve for $\phi_{ij}$ in terms of its neighbors for the first iteration (taking h₁ = h₂):

$$\phi_{ij}^{FD} = \frac{1}{4}\left( \phi_{i+1,j}+\phi_{i-1,j}+\phi_{i,j+1}+\phi_{i,j-1} \right)$$

The flag grid acts like a mask, leaving all the specified boundary values unchanged during the iterations. Similar to the Runge-Kutta method, we evaluate the maximum error in each iteration,

$$\Delta = \max_{ij} \left| \phi_{ij}^{FD}-\phi_{ij} \right|$$

If the maximum error is larger than the specified tolerance, $\Delta > \epsilon_{max}$, we pass the new values of $\phi_{ij}$ back into the finite-difference formula and repeat the calculation until the error is below the tolerance or the number of iterations exceeds a certain upper bound. The new value $\phi_{ij}^{new}$ can take the following form, a weighted average of $\phi_{ij}$ and $\phi_{ij}^{FD}$:

$$\phi_{ij}^{new} = (1-\omega)\phi_{ij} + \omega\phi_{ij}^{FD}$$

where 0 < ω < 2. If 1 < ω < 2, the algorithm converges more quickly; this is known as the successive over-relaxation algorithm. The regime 0 < ω < 1 corresponds to under-relaxation, where the solution converges relatively slowly. Here, we will solve the Laplace equation in a cylinder containing three disks of given inner and outer radii. Instead of the Cartesian formula above, we use the finite difference in cylindrical coordinates and assume azimuthal symmetry,

$$\phi_{ij}^{FD} = \frac{1}{4}\left( \phi_{i+1,j}+\phi_{i-1,j}+\phi_{i,j+1}+\phi_{i,j-1} \right)+\frac{1}{8j}\left( \phi_{i,j+1} - \phi_{i,j-1} \right)$$

where z = ih and r = jh. The above equation applies to points where r ≠ 0. On the central axis, where r = 0, we should have

$$\phi_{i0}^{FD} = \frac{1}{6}\left( 4\phi_{i,1}+\phi_{i+1,0}+\phi_{i-1,0}\right)$$

In order to use the finite difference algorithm, we need to set up matrices in C++. We used _arrayt.h_ to generate the matrices and do the calculations.

OVER-RELAXATION

First, we investigated the powerful role of ω. We set the grid spacing h = 0.1 and the error tolerance $\epsilon_{max}$ = 2, and counted the number of iterations needed for the error to drop below the tolerance. The results are shown in the following table.

  ω                      1.0   1.2   1.4   1.6   1.8   1.9
  ---------------------- ----- ----- ----- ----- ----- ------
  Number of iterations   655   511   431   371   359   1077

The best choice seems to be around ω = 1.8, and we use this over-relaxation value for the following calculations.

SOLUTIONS

For solving the electric potential inside the cylinder with the three disk electrodes and boundaries, the grid spacing is the same, h = 0.1, while the error tolerance is much smaller, $\epsilon_{max}$ = 0.01. This one percent error increases the number of iterations significantly, to N = 4231. The contour plot of the numerical solution is shown in Fig. 1,
where the red line represents the middle disk electrode held at ϕ = 2000 V and the black line and rectangle are the two grounded electrodes at ϕ = 0 V. The electric potential contours are distorted by the two grounded electrodes, which compress the potential lines in the z direction. The potential at about r = 10 mm is symmetric, while closer to the central axis it is slightly asymmetric due to the mismatch in the inner radii of the two grounded electrodes. The electric potential is mainly confined among the disks and near the origin; outside of the disks the potential is negligibly small everywhere. This configuration is used in the electron guns of electron microscopes to focus electrons by adjusting the fixed voltage of the center electrode.
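For reference, one over-relaxed sweep of the cylindrical update above might be sketched as follows, assuming _arrayt.h_ exposes n1()/n2() dimensions and (i, j) element access as in the course library; the exact interface of the homework code may differ.

```cpp
#include <algorithm>
#include <cmath>
#include "arrayt.h"   // course array class, interface assumed as described

// One over-relaxed sweep of the cylindrical finite-difference update.
// phi(i,j) holds the potential at z = i*h, r = j*h; flag(i,j) != 0 marks
// fixed electrode/boundary points. Returns the maximum change Delta.
double sorSweep(arrayt<double>& phi, const arrayt<int>& flag, double omega)
{
    double maxErr = 0.0;
    for (int i = 1; i < phi.n1() - 1; ++i) {
        for (int j = 0; j < phi.n2() - 1; ++j) {
            if (flag(i, j)) continue;      // boundaries and electrodes stay fixed
            double fd;
            if (j == 0)                    // on-axis (r = 0) formula
                fd = (4.0 * phi(i, 1) + phi(i + 1, 0) + phi(i - 1, 0)) / 6.0;
            else                           // off-axis formula with the 1/(8j) term
                fd = 0.25 * (phi(i + 1, j) + phi(i - 1, j)
                           + phi(i, j + 1) + phi(i, j - 1))
                   + (phi(i, j + 1) - phi(i, j - 1)) / (8.0 * j);
            double updated = (1.0 - omega) * phi(i, j) + omega * fd;
            maxErr = std::max(maxErr, std::fabs(updated - phi(i, j)));
            phi(i, j) = updated;           // Gauss-Seidel style in-place update
        }
    }
    return maxErr;
}
```

The caller repeats sorSweep() until the returned Δ falls below $\epsilon_{max}$ or the iteration cap is reached.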
A1
ADAPTIVE STEPSIZE RUNGE-KUTTA ALGORITHM

We implemented the 4th order Runge-Kutta method to solve a chaotic ODE system and it turned out to be a powerful solver. However, one of its drawbacks is that the step size is fixed. When the solution happens to be smooth and changes slowly, the number of steps could be reduced for efficiency. On the other hand, if the solution is changing sharply, a fixed step size could fail to capture the abrupt variations and the errors might accumulate over time. As a result, for higher efficiency and accuracy, the adaptive step size Runge-Kutta method is a better ODE solver and can handle more equations at the same time. As the authors of Numerical Recipes for C++ put it, "many small steps should tiptoe through treacherous terrain while a few great strides through smooth uninteresting countryside."

The step size is controlled by the truncation error between the fifth order and fourth order values. If the error exceeds the tolerance, we reduce the step size until the error is below the tolerance. On the other hand, if the error is small, it is more efficient to increase the step size so that the error is of the same order of magnitude as the tolerance. The general form of the adaptive step size Runge-Kutta method is as follows, where $t_n$ and $y_n$ are the time and the vector containing the N ODE values at the nth step:

$$k_1 = hf(t_n, y_n)$$
$$k_2 = hf(t_n+c_2h, y_n+a_{21}k_1)$$
$$...$$
$$k_6 = hf(t_n+c_6h, y_n+a_{61}k_1+...+a_{65}k_5)$$
$$y_{n+1} = y_n+\sum_{i=1}^{6} b_ik_i$$
$$y^*_{n+1} = y_n+\sum_{i=1}^{6} b^*_ik_i$$

where the a's, b's, b*'s and c's are the constants found by Dormand and Prince, all tabulated in Chapter 17.2 of Numerical Recipes. The $y^*_{n+1}$ is calculated to fifth order, so the error term can be written as

$$\Delta = \left| y_{n+1} - y^*_{n+1} \right|$$

Note that here the |·| is the absolute value of each element rather than the norm of the N-element vector. Picking the maximum value in the Δ vector, _err_, we compare _err_ to our tolerance _err0_. First of all, if _err_ is larger than _err0_, we have to scale down the step size and recalculate until err < err0. Secondly, for scaling the step size, whichever the case is, we can always apply the same scaling factor,

$$scale = \left( \frac{err0}{err} \right)^{1/4} \times S$$

The S here is a safety factor with the requirement S < 1; we chose S = 0.8 in the program. The 1/4 power was used instead of 1/5 since it scales the step size up or down more aggressively and the function _sqrt_ in C++ is easy and accurate. Thirdly, we cannot let the step size change abruptly, say from h₀ to 0.01h₀ or 50h₀. In such cases the adaptive step size Runge-Kutta method would either miss features within the 50h₀ step or become inefficient with the 0.01h₀ step size. We therefore impose extreme scaling values, _maxscale_ and _minscale_. The re-scaled step size is passed to the next run, and so on. Finally, in order to avoid infinite _while_ loops, we added a _counter_ to keep track of the number of loops. If _counter_ exceeds a certain number (100 in the program), we break the while loop and print a message that err was never reduced below err0 at this step. However, this never happened even though we tried tens of cases with billions of steps; usually it took fewer than 5 loops to reduce the error to within our tolerance.

Let's talk about the specified error tolerance err0 and the number of steps needed to reach a certain final t value.
The initial step size doesn't make a difference because the step size gets scaled up or down depending on the solution and the system. Given the same number of steps, the final t value reached depends mainly on err0: the smaller the tolerance, the smaller the step size at each step. With too small a tolerance, the final t ends up far below the expected value. We can either increase the number of steps or increase the error tolerance, the latter being the more efficient of the two. In the following tests and computations this issue arose most of the time, and it is necessary to choose a good, acceptable err0 and number of steps for the ODE system. We found that err0 = 10⁻⁵ times the initial value works well for most cases.
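The step-size controller described above can be condensed into a small helper like the sketch below; the clamp values _minscale_ and _maxscale_ are illustrative defaults, not the ones used in the homework.

```cpp
#include <algorithm>
#include <cmath>

// Scale the step size by S*(err0/err)^(1/4), clamped to [minscale, maxscale]
// so the step never changes too abruptly. If err > err0, the returned h is
// smaller and the caller retries the step; otherwise the next step grows.
double newStepSize(double h, double err, double err0,
                   double minscale = 0.2, double maxscale = 5.0)
{
    const double S = 0.8;                                 // safety factor, S < 1
    if (err == 0.0) return h * maxscale;                  // perfect step: grow fully
    double scale = S * std::sqrt(std::sqrt(err0 / err));  // (err0/err)^(1/4) via sqrt
    scale = std::min(maxscale, std::max(minscale, scale));
    return h * scale;
}
```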