
sep:courses:gp211_labs [2011/12/01 17:19]
sep:courses:gp211_labs [2015/05/27 02:06] (current)
^ ^ **Name ** ^ **Description ** ^ **Due ** ^ **Download ** ^
-| {{sep:courses:gif:lab1.gif}}| ** 1-D Gradient Operator **\\ \\ {{sep:courses:labs211:labs:lab1:lab1.pdf|Lab1.pdf}}| The gradient of a continuous function of more than one variable is a vector quantity. The gradient of a discrete function is normally approximated by finite differences. You will make simple modifications to two Fortran90 programs in order to compute the discrete gradient vector of a topographical image of the San Francisco Bay area. You will then modify a third program to compute the gradient magnitude, which is a scalar measure of relative steepness, and answer a couple of simple questions about the results.  | 9/29 |{{sep:courses:labs211:labs:lab1:lab1.tar|Lab1.tar}} |
+[[http://sepwww.stanford.edu/data/media/public/sep/prof/Lab1_fortran.pdf|Lab1_fortran.pdf]]
-| {{sep:courses:gif:lab2.gif}} | ** Basic operators and adjoints **\\ \\ {{sep:courses:labs211:labs:lab2:lab2.pdf|Lab2.pdf}}| In this computer exercise you will become familiar with some useful basics. You are given a subroutine that performs causal low-cut filtering. Your task is to code the adjoint process (anticausal filtering) and then to perform the dot-product test to check whether you have coded the adjoint properly. Additionally, you will be asked to answer some questions and to use your subroutine to roughen the Sea of Galilee image. You will also need to complete a couple of basic UNIX and gmake tasks, which are useful for research and for future labs. | 10/06 |{{sep:courses:labs211:labs:lab2:lab2.tar|Lab2.tar}}| +
-| {{sep:courses:gif:lab3.gif}} | ** Model fitting by least squares **\\ \\ {{sep:courses:labs211:labs:lab3:lab3.pdf|Lab3.pdf}}| In this computer exercise you are given a module that performs a simple velocity transform. First you will improve the linear operator by implementing linear interpolation instead of nearest-neighbor interpolation. In the second part of the assignment, you will apply an iterative least-squares optimization for velocity transform inversion. | 10/13 | {{sep:courses:labs211:labs:lab3:lab3.tar|Lab3.tar}}| +
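Lab 1's discrete gradient can be sketched in a few lines of NumPy. This is an illustrative sketch, not the lab's Fortran90 code: the function names and the forward-difference scheme (with a zero-padded last row/column) are assumptions.

```python
import numpy as np

def gradient(f, dx=1.0, dy=1.0):
    """Forward-difference approximation to the gradient of a 2-D
    sampled function; the last column/row of each component is zero."""
    gx = np.zeros(f.shape)
    gy = np.zeros(f.shape)
    gx[:, :-1] = (f[:, 1:] - f[:, :-1]) / dx   # d/dx along columns
    gy[:-1, :] = (f[1:, :] - f[:-1, :]) / dy   # d/dy along rows
    return gx, gy

def gradient_magnitude(f, dx=1.0, dy=1.0):
    """Scalar measure of steepness: sqrt(gx^2 + gy^2) at each sample."""
    gx, gy = gradient(f, dx, dy)
    return np.hypot(gx, gy)
```

On a plane that ramps up by one unit per sample in x, the interior gradient magnitude is 1 everywhere, which is a convenient sanity check before running the operator on the Bay Area topography.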
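The adjoint pairing and the dot-product test from Lab 2 can be illustrated with a toy causal filter. The specific filter below (a causal first difference, which is one simple low-cut) is an assumption for illustration; the lab's subroutine differs, but the test is the same: the forward and adjoint must satisfy <Fx, y> = <x, F'y> for random vectors.

```python
import numpy as np

def causal_lowcut(x):
    """Toy causal low-cut filter: y[i] = x[i] - x[i-1]."""
    y = x.copy()
    y[1:] -= x[:-1]
    return y

def anticausal_adjoint(y):
    """Adjoint of the filter above; it runs anticausally:
    x[i] = y[i] - y[i+1]."""
    x = y.copy()
    x[:-1] -= y[1:]
    return x

def dot_product_test(fwd, adj, n, seed=0):
    """Check <F x, y> == <x, F' y> on random vectors of length n."""
    rng = np.random.default_rng(seed)
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    return np.isclose(np.dot(fwd(x), y), np.dot(x, adj(y)))
```

If the two dot products disagree beyond rounding error, the adjoint is coded wrongly; this test catches sign and indexing mistakes long before any inversion is attempted.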
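The linear-interpolation upgrade in Lab 3 amounts to weighting the two bracketing grid points instead of snapping to the nearest one. A minimal 1-D sketch of the forward operator and its adjoint, under assumed names and a unit-spaced grid (the lab's velocity-transform operator is 2-D and not reproduced here):

```python
import numpy as np

def lin_interp(model, coords):
    """Forward: sample a 1-D gridded model at fractional coordinates,
    weighting the two bracketing grid points linearly."""
    i = np.clip(np.floor(coords).astype(int), 0, len(model) - 2)
    w = coords - i
    return (1.0 - w) * model[i] + w * model[i + 1]

def lin_interp_adjoint(data, coords, n):
    """Adjoint: scatter each data value back to its two bracketing
    model points with the same weights (np.add.at accumulates
    contributions that land on the same index)."""
    coords = np.asarray(coords)
    i = np.clip(np.floor(coords).astype(int), 0, n - 2)
    w = coords - i
    model = np.zeros(n)
    np.add.at(model, i, (1.0 - w) * data)
    np.add.at(model, i + 1, w * data)
    return model
```

Because the pair passes the dot-product test, it can be handed directly to an iterative least-squares solver for the inversion part of the assignment.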
- +
-| {{sep:courses:gif:lab4.gif}} | **Missing Data Estimation\\ and Multiple Realizations **\\ \\ {{sep:courses:labs211:labs:lab4:lab4.pdf|Lab4.pdf}}| Geostatistics has gained much recent attention for its ability to solve missing data problems not with a single answer, but with a suite of so-called equiprobable models, or models that fit the data equally well. The geostatistical approach gives better results in fluid flow simulation (for instance), and also provides a useful tool for multiple-scenario analysis. Clapp (SEP-105) showed that multiple realizations can easily be incorporated into the existing SEP framework for solving missing data problems. This lab has two parts. In the first, you will modify a conjugate gradient algorithm to allow multiple realizations, and then test it on the 1-D inverse linear interpolation problem given in Chapter 3 of the book. In the second part, you will use your new solver to tackle the ** Wells not matching the seismic map ** problem of Chapter 3. You will improve the "extension" of the seismic data into empty regions of the model space, first by writing a program to de-trend the seismic data with a least-squares best-fitting plane, and second by using your new solver to produce multiple realizations of the extended seismic data. | 10/25 | {{sep:courses:labs211:labs:lab4:lab4.tar|Lab4.tar}}| +
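The multiple-realizations idea from Lab 4 can be mimicked without touching the solver internals: perturb the data with a fresh noise draw before each solve and collect the resulting suite of models. This is only a sketch of the concept; Clapp's approach modifies the conjugate-gradient solver itself, and the noise level and solver below are assumptions.

```python
import numpy as np

def realizations(L, d, nreal=10, noise=0.1, seed=1):
    """Produce a suite of least-squares models that fit the data about
    equally well, by re-solving L m ~= d + noise for several random
    noise draws (a simple stand-in for multiple realizations)."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(nreal):
        dn = d + noise * rng.standard_normal(d.shape)
        m, *_ = np.linalg.lstsq(L, dn, rcond=None)
        models.append(m)
    return np.array(models)
```

The spread across the suite (e.g. the per-sample variance of the stacked models) is what makes the ensemble useful for multiple-scenario analysis.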
- +
-| {{sep:courses:gif:lab5.gif}} | ** Interval\\ Velocity\\ Estimation **\\ \\ {{sep:courses:labs211:labs:lab5:lab5.pdf|Lab5.pdf}}| This lab will guide you through most of the steps - and pitfalls - of interval velocity estimation, perhaps the most important problem in exploration geophysics. You will apply the least-squares velocity scan of Lab 3, and then pick RMS velocities both with an autopicker and by hand. You will apply the multiple-realizations concept to bound the uncertainty of your picked RMS velocities. You will solve the Dix equation as a least-squares problem to estimate interval velocity in 1-D, and take some preliminary steps in the direction of 2-D. | 10/29 | {{sep:courses:labs211:labs:lab5:lab5.tar|Lab5.tar}} | +
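Posing the Dix equation as least squares, as Lab 5 asks, means treating vrms_k^2 t_k = sum_{i<=k} vint_i^2 dt_i as a linear system for the squared interval velocities, with causal integration as the operator. A hedged 1-D sketch (function name, damping style, and the direct `lstsq` solve are assumptions; the lab uses an iterative solver):

```python
import numpy as np

def dix_least_squares(vrms, t, eps=0.0):
    """Estimate interval velocities from RMS velocities by solving the
    Dix relation  vrms_k^2 t_k = sum_{i<=k} vint_i^2 dt_i  as a damped
    linear least-squares problem; eps trades data fit for a small model."""
    t = np.asarray(t, float)
    dt = np.diff(np.concatenate(([0.0], t)))
    C = np.tril(np.ones((len(t), len(t)))) * dt   # causal integration operator
    A = np.vstack([C, eps * np.eye(len(t))])
    rhs = np.concatenate([np.asarray(vrms) ** 2 * t, np.zeros(len(t))])
    v2, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.sqrt(np.maximum(v2, 0.0))   # clip negatives before the sqrt
```

The damping matters in practice: straight differencing of noisy RMS picks easily produces negative squared interval velocities, which is one of the pitfalls the lab explores.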
- +
-| {{sep:courses:gif:lab6.gif}} | **Deblurring \\ by inversion \\ with regularization**\\ \\ {{sep:courses:labs211:labs:lab6:lab6.pdf|Lab6.pdf}} | In this lab, you will test different regularization schemes for deblurring a text image contaminated with random noise. Regularization is very important in inversion when there are not enough data. Prior information can be included in the regularization to make the inversion converge faster. You will first code the adjoint of the given operator. You will then apply two regularization operators to the linear inversion. In the final part, you will apply an edge-preserving regularization scheme to the non-linear inversion. | 11/08 | {{sep:courses:labs211:labs:lab6:lab6.tar|Lab6.tar}} | +
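The linear part of Lab 6 is classic Tikhonov-style regularization: minimize |Bm - d|^2 + eps^2 |Dm|^2 for a blurring operator B and a roughening regularizer D. A minimal sketch via the normal equations, assuming a first-difference roughener (the lab's operators, and its edge-preserving non-linear scheme, are not reproduced here):

```python
import numpy as np

def deblur(B, d, eps):
    """Regularized inversion: minimize |Bm - d|^2 + eps^2 |Dm|^2 with a
    first-difference roughener D, solved via the normal equations."""
    n = B.shape[1]
    D = np.eye(n) - np.eye(n, k=1)   # penalizes rough (oscillatory) models
    return np.linalg.solve(B.T @ B + eps**2 * (D.T @ D), B.T @ d)
```

With eps = 0 the result is the unregularized least-squares model; raising eps pulls the estimate toward smoothness, which is exactly the trade-off the lab asks you to explore on the noisy text image.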
- +
-| {{sep:courses:gif:lab7.gif}} | ** Madagascar **\\ \\ {{sep:courses:labs211:labs:lab7:lab7.pdf|Lab7.pdf}} | The Madagascar satellite data set provides images of a spreading ridge off the coast of Madagascar. This data set has two regions: the southern half is densely sampled and the northern half is sparsely sampled. The sparsely sampled region presents a missing data problem. In this exercise, you will apply basic fitting goals to image the dense area. Then, you will be asked to fill in the missing data in the sparsely sampled region, using prediction-error filters estimated on the dense southern half. Preconditioning will be used to speed convergence. The prediction-error filters will attempt to spread the texture of the spreading ridge into the sparse tracks. | 11/15 | {{sep:courses:labs211:labs:lab7:lab7.tar|Lab7.tar}}  | +
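The prediction-error filters in Lab 7 are estimated by least squares on the well-sampled region: predict each sample from its neighbors, and keep the filter whose output is the prediction error. A 1-D sketch of the estimation step (the lab's filters are 2-D, and the missing-data fill - minimizing the filter output over the unknown samples - is not shown; names are assumptions):

```python
import numpy as np

def estimate_pef(x, nlag=3):
    """Least-squares prediction-error filter: fit coefficients that
    predict x[t] from its nlag previous samples, then return
    (1, -a1, ..., -a_nlag) so that convolving the filter with x
    yields the prediction error."""
    A = np.array([x[i - nlag:i] for i in range(nlag, len(x))])
    a, *_ = np.linalg.lstsq(A, x[nlag:], rcond=None)
    return np.concatenate(([1.0], -a[::-1]))
```

On data the filter was trained on, the prediction error is nearly white; filling missing samples so that the PEF output stays small is what spreads the learned texture into the empty tracks.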
- +
-| {{sep:courses:gif:lab8.gif}} | ** Model \\ space \\ residual **\\ \\ {{sep:courses:labs211:labs:lab8:lab8.pdf|Lab8.pdf}} | Steepest descent minimizes the residual in the data space. Alternatively, we can try to minimize the residual in the model space, which opens the possibility of a spatially variable step length. | 11/21 | {{sep:courses:labs211:labs:lab8:lab8.tar|Lab8.tar}}  |+
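For reference, the baseline that Lab 8 contrasts against is ordinary data-space steepest descent, where a single scalar step length is chosen by exact line search at each iteration. A minimal sketch (the lab's model-space variant, which replaces this one scalar with a spatially variable step, is the assignment itself and is not reproduced):

```python
import numpy as np

def steepest_descent(A, d, niter=100):
    """Data-space steepest descent on |Am - d|^2: step along the
    gradient g = A'r with the exact line-search scalar
    alpha = (g.g)/(Ag.Ag)."""
    m = np.zeros(A.shape[1])
    for _ in range(niter):
        r = A @ m - d          # data-space residual
        g = A.T @ r            # gradient direction
        Ag = A @ g
        denom = Ag @ Ag
        if denom == 0.0:       # gradient vanished: at the minimum
            break
        m -= (g @ g) / denom * g
    return m
```

Note that alpha is one number for the whole model; a per-component step is exactly the extra freedom the model-space formulation provides.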
 +[[http://sepwww.stanford.edu/data/media/sep/courses/labs210/lab1_fortran.pdf |lab1_fortran.pdf]]
 +[[http://sepwww.stanford.edu/data/media/sep/people/group-pic-2014-downsize.jpg | group4.jpg]]
\\ \\
{{page>share:footer&nofooter&noeditbtn}}