1
Heterogeneous Interactions in the Interstellar Medium:
Methods and Applications of the AstroBEAR Code, by Kristopher Yirak. Advisor: Professor Adam Frank. Computational Astrophysics Group, Department of Physics & Astronomy, School of Arts and Sciences, University of Rochester, Rochester, NY. Presented at Los Alamos National Lab, August 19th, 2010. Background: HST image of a star-forming region in NGC 3372, the Carina Nebula (released April 24, 2007). The image is from a mosaic of the Carina Nebula assembled from 48 frames taken with the Hubble Space Telescope's Advanced Camera for Surveys, in the light of neutral hydrogen; color information was added with data taken at the Cerro Tololo Inter-American Observatory in Chile. Red corresponds to sulfur, green to hydrogen, and blue to oxygen emission. (Image credit: Nathan Smith, University of California, Berkeley.)
2
Motivating my talk: astrophysics & numerics
‡ My work has consisted, in equal parts, of numerics and code development, and of the application of that code to astrophysical problems. ‡ The applications focus on phenomena related to star formation. In order to help you appreciate this environment and its beauty, I will introduce it. All of my work relies on direct numerical simulations; to help you appreciate this aspect, I will introduce our numerical methods and discuss two selected implementations of mine in the AstroBEAR code. (Figure: Cunningham, A.J., PhD thesis, 2008.) After the astrophysical introduction and the discussion of numerics, we will consider two applications revolving around star formation and related processes, ending with a question regarding the fundamental role of resolution in multi-physics simulations.
3
Astrophysical jets are widespread and important
Astrophysical jets exist on a wide range of physical scales, from the Herbig-Haro (HH) objects of star-forming regions to the relativistic jets driven by the black holes of Active Galactic Nuclei (AGN). Images: HH 34 (J. Hester, ASU, 1995) and the M87 jet, whose bluish tint is synchrotron emission from electrons spiraling along field lines. The M87 data were collected with Hubble's Wide Field Planetary Camera 2 in 1998 by J.A. Biretta, W.B. Sparks, F.D. Macchetto, and E.S. Perlman (STScI); the Hubble Heritage team combined exposures in ultraviolet, blue, green, and infrared light to create the color image.
4
HH objects play an important role in star-forming regions
HH objects were discovered independently in 1951 and 1952 by George Herbig and Guillermo Haro, respectively. Haro (1952) interpreted them as luminous nebulae perhaps associated with "faint, very blue, hot star[s]." There are now well over 400 identified HH objects*; they are believed to play an important role in large-scale dynamics in molecular clouds. HH objects share a similar morphology and launching mechanism with AGN jets, though the two differ greatly in scale (AGN jets reach ~kpc) and speed (AGN jets are relativistic). *Reipurth, 1999. Figure: Carroll, J., 2010.
5
HH objects are interesting, important, and ideal for study
Length l ~ 1–10 pc (2×10⁵–2×10⁶ AU); radius r ~ 100 AU. The first HH jets were observed by Mundt & Fried in 1983. HH objects are sequences of emission knots with aligned velocity vectors, typically culminating in a bow shock located hundreds of AU to tens of parsecs from the originating YSO. The knots are more or less regularly spaced. The emission in H-α, [S II], etc. is assumed to come from shock heating. Image: HH 111.
6
HH objects come from stars, from gas and dust
HH objects are associated with Young Stellar Objects (YSOs). They are launched as a result of the interplay between the forming star, the disc of accreting material, and the magnetic fields threading it. (Figure from Bachiller 1996, Ann. Rev.)
7
Structure and physical parameters of HH objects
Diagnostics include: direct observation; Doppler shifts and proper motions; emission lines; emission line ratios; and emission line ratios parallel to shocks (Hartigan 2007).
8
(Transition) So that's what I want to study; now to say something about how to go about studying it.
9
The equations describing fluid flow
Leonhard Euler (1707–1783), Swiss; a 1753 portrait by Swiss painter Emanuel Handmann. "Principes generaux du mouvement des fluides," published in Mémoires de l'Academie des Sciences de Berlin in 1757.
Conservation of mass (ρ: mass density, $\mathbf{v}$: velocity vector): $\partial_t \rho + \nabla\cdot(\rho\mathbf{v}) = 0$.
Conservation of momentum (p: pressure): $\partial_t(\rho\mathbf{v}) + \nabla\cdot(\rho\mathbf{v}\otimes\mathbf{v}) + \nabla p = 0$.
Conservation of energy (E = kinetic + internal energy): $\partial_t E + \nabla\cdot[(E+p)\mathbf{v}] = 0$.
These represent 3 equations in 4 unknowns; an equation of state relates the state variables and "closes"--fully defines--the system.
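For concreteness, the standard ideal-gas closure (a textbook relation, stated here for illustration rather than taken from the slide) expresses the pressure in terms of the conserved variables,

$p = (\gamma - 1)\left(E - \tfrac{1}{2}\rho\,|\mathbf{v}|^{2}\right),$

where γ is the ratio of specific heats; substituting this for p leaves as many equations as unknowns.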
10
Computers are discrete, requiring discrete equations to solve at a finite number of points
The equations are discretized by approximating derivatives via Taylor expansion, e.g. $\partial u/\partial x \approx (u_{i+1} - u_i)/\Delta x$, and are solved at discrete points in space. In finite-volume form the update is, schematically, $u_i^{n+1} = u_i^n - \frac{\Delta t}{\Delta x}\left[f(u)_{i+1/2} - f(u)_{i-1/2}\right]$ (u: fluid variable; f(u): flux; P: piecewise polynomial reconstruction of u, used to evaluate interface fluxes). The continuous equations apply because fluids can be treated as continuous materials in the macroscopic view. Figure: Cunningham, A.J., PhD thesis, 2008.
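To make the discretization concrete, here is a minimal sketch, in Python, of a first-order finite-volume update for 1D linear advection $u_t + a\,u_x = 0$ with upwind interface fluxes and a CFL-limited time step. It is not AstroBEAR code; the grid size, advection speed, and initial profile are illustrative choices.

```python
import numpy as np

def upwind_advect(u, a, dx, cfl=0.5, t_end=1.0):
    """First-order finite-volume upwind scheme for u_t + a u_x = 0 (a > 0),
    with periodic boundaries. Each cell average is updated by the difference
    of fluxes f = a*u through its left and right interfaces."""
    t = 0.0
    while t < t_end:
        dt = min(cfl * dx / abs(a), t_end - t)  # CFL-stable time step
        flux = a * u                            # upwind: interface carries the left cell's value
        u = u - (dt / dx) * (flux - np.roll(flux, 1))
        t += dt
    return u

# Advect a Gaussian pulse once around a periodic domain.
nx = 200
x = (np.arange(nx) + 0.5) / nx
u0 = np.exp(-200 * (x - 0.5) ** 2)
u1 = upwind_advect(u0.copy(), a=1.0, dx=1.0 / nx, t_end=1.0)
print("max|u1 - u0| =", np.abs(u1 - u0).max())  # nonzero: first-order numerical diffusion
```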
11
The case for Adaptive Mesh Refinement (AMR)
In 2D, doubling the resolution increases the computational time by a factor of 8*; in 3D, by a factor of 16*. Simulations quickly become prohibitively large. AMR (pioneered by Berger & Oliger 1984) was long desired; the main stumbling block was the construction of conservative inter-level fluxes. * (1) Doubling the resolution in dimension D results in 2^D more cells. (2) Halving the cell size halves the stable time step (CFL condition: C ≡ vΔt/Δx < 1/D). ⇒ An overall slowdown of 2^(D+1).
12
Yirak, 2010
13
Astrophysics with AMR: AstroBEAR
AstroBEAR is the result of an active collaboration, with primary work undertaken by graduate students at the University of Rochester (UR). The code is "a parallelized astrophysics AMR code with capabilities of carrying out hydro- or magnetohydrodynamics (MHD) simulations in two or three dimensions." The group recognizes that the variety of astrophysical problems requires flexibility in numerics; hence, the code is modular, providing several integration-scheme options. Development continues along the complementary lines of numerics development and physics development. The code has a website and wiki, the latter being very important and novel in the community, serving both as living documentation and as a repository of institutional memory.
14
The code produces results: 30+ publications since ~2002
* The Interaction Between a Pulsed Astrophysical Jet and Small-Scale Heterogeneous Media (2007) Yirak, Kristopher; Frank, Adam; Cunningham, Andrew; Mitran, Sorin
* Hypersonic swizzle sticks: jets, fossil cavities and turbulence in molecular clouds (2007) Cunningham, Andrew J.; Frank, Adam; Blackman, Eric G.; Quillen, Alice
* Outflow Evolution in Turbulent Clouds (2006) Cunningham, Andrew; Frank, A.; Quillen, A. C.; Blackman, E. G.
* Outflow-driven Cavities: Numerical Simulations of Intermediaries of Protostellar Turbulence (2006) Cunningham, Andrew J.; Frank, Adam; Quillen, Alice C.; Blackman, Eric G.
* Protostellar Jet Collisions Reduce the Efficiency of Outflow-Driven Turbulence in Molecular Clouds (2006) Cunningham, Andrew J.; Frank, Adam; Blackman, Eric G.
* A Numerical Study of the Hydrodynamic Interaction of YSO Jets (2005) Cunningham, A. J.; Frank, A.; Blackman, E. G.
* A Numerical Study of Outflow-blown Cavities (2005) Cunningham, A. J.; Thorndike, S. L.; Frank, A.; Quillen, A. C.; Blackman, E. G.
* A HED Laboratory Astrophysics Testbed Comes of Age: JET Deflection via Cross Winds (2005) Frank, A.; Blackman, E. G.; Cunningham, A.; Lebedev, S. V.; Ampleford, D.; Ciardi, A.; Bland, S. N.; Chittenden, J. P.; Haines, M. G.
* Evolution and Fragmentation of Wide-Angle Wind Driven Molecular Outflows (2005) Cunningham, Andrew; Frank, Adam; Varnière, Peggy; Poludnenko, Alexei; Mitran, Sorin; Hartmann, Lee
* Wide-Angle Wind-driven Bipolar Outflows: High-Resolution Models with Application to Source I of the Becklin-Neugebauer/Kleinmann-Low OMC-I Region (2005) Cunningham, Andrew; Frank, Adam; Hartmann, Lee
* Turbulence Driven by Outflow-blown Cavities in the Molecular Cloud of NGC 1333 (2005) Quillen, Alice C.; Thorndike, Stephen L.; Cunningham, Andy; Frank, Adam; Gutermuth, Robert A.; Blackman, Eric G.; Pipher, Judith L.; Ridge, Naomi
* Jet Deflection via Crosswinds: Laboratory Astrophysical Studies (2004) Lebedev, S. V.; Ampleford, D.; Ciardi, A.; Bland, S. N.; Chittenden, J. P.; Haines, M. G.; Frank, A.; Blackman, E. G.; Cunningham, A.
* Strings in the eta Carinae Nebula: Hypersonic Radiative Cosmic Bullets (2004) Poludnenko, A. Y.; Frank, A.; Mitran, S.
* A Laboratory Investigation of Supersonic Clumpy Flows: Experimental Design and Theoretical Analysis (2004) Poludnenko, A. Y.; Dannenberg, K. K.; Drake, R. P.; Frank, A.; Knauer, J.; Meyerhofer, D. D.; Furnish, M.; Asay, J. R.; Mitran, S.
* Dynamics and properties of astrophysical inhomogeneous media (2004) Poludnenko, Alexei Y.
* Clumpy Flows in Protoplanetary and Planetary Nebulae (2004)
* Stellar Outflows with New Tools: Advanced Simulations and Laboratory Experiments (2003) Frank, A.; Poludnenko, A.; Gardiner, T. A.; Lebedev, S. V.; Drake, R. P.
* Hydrodynamics of Clumpy Flows: Application to the PNe (2003) Poludnenko, A. Y.; Frank, A.; Blackman, E. G.
* Hypersonic Swizzle Sticks: Interacting YSO Jets (2002) Frank, A.; Cunningham, A.; Powers, A.; Poludnenko, A.
* The Propagation of Radiative Interstellar Bullets: The Strings of eta Car (2002) Poludnenko, A. Y.; Mitran, S.; Mellema, G.; Frank, A.
* Hydrodynamic Interaction of Strong Shocks with Inhomogeneous Media. I. Adiabatic Case (2002)
* Strong Shocks and Supersonic Winds in Inhomogeneous Media (2002)
* Hydrodynamic Interaction of Shock Waves with Inhomogeneous Media (2001)
* Design of Experiments to Simulate Shock-Wave Penetration of Clumpy Molecular Clouds (2001) Dannenberg, K. K.; Drake, R. P.; Furnish, M. D.; Knudson, J. D.; Asay, J. R.; Hebron, D. E.; Schroen-Carey, D.; Poludnenko, A.; Frank, A.; Arnett, D.
* AMR Simulations of Winds in Clumpy Flows (2000) Poludnenko, A.; Frank, A.; LeVeque, R.; Berger, M.
15
Code development is fruitful and ongoing
16
While grads come and go, their impact allows successors to build on their work
17
Personal Highlights Timeline
18
Motivation for one physics code improvement: What to do when cooling clouds collapse?
The physical process of optically-thin radiative cooling may allow shocked, dense clumps to collapse to the point where they become gravitationally unstable*. Such collapsing clumps may therefore be progenitors of localized star formation. Accurately capturing this evolution** requires the inclusion of self-gravity, which is represented by a fundamentally different type of equation, Poisson's equation: $\nabla^2 \phi = 4\pi G \rho$. * Characterized by the Jeans length, $\lambda_J = \sqrt{\pi c_s^2/(G\rho)}$ (c_s: sound speed). ** Accurate treatment imposes a Jeans-length-motivated constraint on the resolution, i.e. $\Delta x \lesssim \lambda_J/4$ (Truelove 1998).
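As a rough illustration of the Jeans-length constraint (the clump temperature, density, and mean molecular weight below are assumed, illustrative values, not numbers from the talk):

```python
import numpy as np

K_B = 1.380649e-16   # Boltzmann constant [erg/K]
G = 6.674e-8         # gravitational constant [cm^3 g^-1 s^-2]
M_H = 1.6735e-24     # hydrogen mass [g]
PC = 3.086e18        # parsec [cm]

def jeans_length(T, n, mu=2.3):
    """Jeans length lambda_J = sqrt(pi c_s^2 / (G rho)) for an isothermal
    gas at temperature T [K] and number density n [cm^-3]."""
    rho = mu * M_H * n
    cs2 = K_B * T / (mu * M_H)   # isothermal sound speed squared
    return np.sqrt(np.pi * cs2 / (G * rho))

# Illustrative cold, dense clump: T = 10 K, n = 1e5 cm^-3.
lam_J = jeans_length(10.0, 1e5)
print(f"lambda_J = {lam_J / PC:.3f} pc")
print(f"Jeans-motivated cell size: dx <= {lam_J / 4 / PC:.4f} pc")
```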
19
HYPRE: what to do when cooling clouds collapse
HYPRE is a library of linear-system solvers, written in C and developed at LLNL for massively parallel applications. HYPRE has been implemented in the fixed-grid version of AstroBEAR, and implementation in the AMR version is presently underway as a collaborative effort among many members of the group. A poster, available as a PDF on my website, has more details.
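To show the kind of problem HYPRE is handed: discretizing Poisson's equation turns it into a sparse linear system $A\phi = b$. The sketch below uses scipy on a 1D grid with zero-potential boundaries as stand-ins for HYPRE and the real 3D AMR mesh; it illustrates the structure of the solve, not AstroBEAR's implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_poisson_1d(rho, dx, G=1.0):
    """Solve d^2(phi)/dx^2 = 4*pi*G*rho on a 1D grid with phi = 0 at both
    boundaries, by assembling the standard three-point Laplacian and
    handing the sparse system to a direct solver."""
    n = len(rho)
    lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc") / dx**2
    return spla.spsolve(lap, 4.0 * np.pi * G * rho)

# A blob of mass in the middle of the domain.
n = 128
rho = np.zeros(n)
rho[n // 2 - 4 : n // 2 + 4] = 1.0
phi = solve_poisson_1d(rho, dx=1.0 / n)
print("potential minimum at cell", int(np.argmin(phi)))  # near the blob
```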
20
Rethinking the AMR algorithm: motivation for one numerics code development
“Old” (typical) AMR algorithm: advance grids on the coarse level, refine, advance, refine, etc. Move through the full timestep sequentially, level-by-level. [Figure: AMR levels 1–3 vs. time.]
21
Rethinking the AMR algorithm: motivation for one numerics code development
“Old” (typical) AMR algorithm: advance grids on the coarse level, refine, advance, refine, etc. Move through the full timestep sequentially, level-by-level. Every 2 time steps, communicate inter-level fluxes before continuing. [Figure: advance stage 1 shown.]
22
Rethinking the AMR algorithm: motivation for one numerics code development
“Old” (typical) AMR algorithm: advance grids on the coarse level, refine, advance, refine, etc. Move through the full timestep sequentially, level-by-level. Every 2 time steps, communicate inter-level fluxes before continuing. [Figure: advance stages 1–4 shown.]
23
The algorithm essentially emphasizes the furthest-behind timestep
“Old” (typical) AMR algorithm: advance grids on the coarse level, refine, advance, refine, etc. Move through the full timestep sequentially, level-by-level. Every 2 time steps, communicate inter-level fluxes before continuing. [Figure: advance stages 1–5 shown.]
24
The algorithm essentially emphasizes the furthest-behind timestep
“Old” (typical) AMR algorithm: advance grids on the coarse level, refine, advance, refine, etc. Move through the full timestep sequentially, level-by-level. Every 2 time steps, communicate inter-level fluxes before continuing. [Figure: advance stages 1–6 shown.]
25
The algorithm essentially emphasizes the furthest-behind timestep
“Old” (typical) AMR algorithm: advance grids on the coarse level, refine, advance, refine, etc. Move through the full timestep sequentially, level-by-level. Every 2 time steps, communicate inter-level fluxes before continuing. [Figure: advance stages 1–8 shown.]
26
The result is not optimal.
“Old” (typical) AMR algorithm: advance grids on the coarse level, refine, advance, refine, etc. Move through the full timestep sequentially, level-by-level. Every 2 time steps, communicate inter-level fluxes before continuing. [Figure: all advance stages shown.] 15 advance stages, 29 stages overall.
27
A simplified figure shows the staggered timesteps
“Old” (typical) AMR algorithm: advance grids on the coarse level, refine, advance, refine, etc. Move through the full timestep sequentially, level-by-level. Every 2 time steps, communicate inter-level fluxes before continuing. [Simplified figure: staggered timesteps across AMR levels 1–3.]
28
The “problem”: this algorithm is not well balanced across processors
“Old” (typical) AMR algorithm: you typically have more grids than processors, more grids in some areas than others, and higher AMR levels in some areas than others. [Figure: example domain with AMR levels 1–4.]
29
An improved algorithm: emphasize the furthest-ahead time steps
“New” AMR algorithm: refine the grid hierarchy from coarsest to finest all at once instead of sequentially. Advance the finest-level grids first, then advance pairs of levels, thereby keeping workers busy. The finest-level grids are always being advanced. [Figure: advance stage 1 shown.]
30
An improved algorithm: emphasize the furthest-ahead time steps
“New” AMR algorithm: refine the grid hierarchy from coarsest to finest all at once instead of sequentially. Advance the finest-level grids first, then advance pairs of levels, thereby keeping workers busy. The finest-level grids are always being advanced. [Figure: advance stages 1–2 shown.]
31
An improved algorithm: emphasize the furthest-ahead time steps
“New” AMR algorithm: refine the grid hierarchy from coarsest to finest all at once instead of sequentially. Advance the finest-level grids first, then advance pairs of levels, thereby keeping workers busy. The finest-level grids are always being advanced. [Figure: advance stages 1–3 shown.]
32
An improved algorithm: emphasize the furthest-ahead time steps
“New” AMR algorithm: refine the grid hierarchy from coarsest to finest all at once instead of sequentially. Advance the finest-level grids first, then advance pairs of levels, thereby keeping workers busy. The finest-level grids are always being advanced. [Figure: advance stages 1–4 shown.]
33
An improved algorithm: emphasize the furthest-ahead time steps
“New” AMR algorithm: refine the grid hierarchy from coarsest to finest all at once instead of sequentially. Advance the finest-level grids first, then advance pairs of levels, thereby keeping workers busy. The finest-level grids are always being advanced. [Figure: advance stages 1–5 shown.]
34
This parallelization of stages results in fewer communication and advance stages
“New” AMR algorithm: refine the grid hierarchy from coarsest to finest all at once instead of sequentially. Advance the finest-level grids first, then advance pairs of levels, thereby keeping workers busy. The finest-level grids are always being advanced. [Figure: all advance stages shown.] 8 advance stages, 22 stages overall.
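The stage counts on these slides can be reproduced with a toy scheduling model (four levels counting the root, refinement ratio 2, and an idealized "advance in parallel whenever due" policy; this is a cartoon of the idea, not AstroBEAR's scheduler):

```python
def old_schedule(levels=4, ratio=2):
    """Depth-first AMR stepping: each advance is its own sequential stage,
    so stages = total number of advances over all levels."""
    stages = []
    def advance(level):
        stages.append(level)            # advance this level once
        if level < levels - 1:
            for _ in range(ratio):      # finer level takes `ratio` substeps
                advance(level + 1)
    advance(0)
    return len(stages)

def new_schedule(levels=4, ratio=2):
    """Overlapped stepping: every stage advances the finest level, and any
    coarser level whose substep falls due advances in parallel with it,
    so the stage count equals the number of finest-level steps."""
    return ratio ** (levels - 1)

print(old_schedule())  # 15 sequential advance stages
print(new_schedule())  # 8 parallel advance stages
```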
35
The simplified figure demonstrates the parallel advances
“New” AMR algorithm: refine the grid hierarchy from coarsest to finest all at once instead of sequentially. Advance the finest-level grids first, then advance pairs of levels, thereby keeping workers busy. The finest-level grids are always being advanced. [Simplified figure: parallel advances across AMR levels 1–3 vs. time.]
36
The simplified figure demonstrates the parallel advances
“New” AMR algorithm: refine the grid hierarchy from coarsest to finest all at once instead of sequentially. Advance the finest-level grids first, then advance pairs of levels, thereby keeping workers busy. The finest-level grids are always being advanced. [Simplified figure: parallel advances across AMR levels 1–3 vs. time.] The advantage: the algorithm is better balanced among processors by allowing parallel advancement of AMR levels.
37
(Transition slide) So, we've seen numerics. How about applications?
38
Observations lead to a variety of interpretations
The first models of the bow shocks were ballistic: either a stationary clump overrun by a wind, or a clump moving into ambient material. The aligned knots in HH objects lead to a YSO "jet" model, in which material flows from a launching engine. Jet models include a range of tweaks: precession, varying opening angle, pulsation, velocity profile, etc. Pulsed jets are a popularly used model.
39
The pulsed jet cannot reproduce all observed features
YSO jets, for example HH 111, feature sub-radial structure (off-axis knots; "spur" shocks).
40
What if the pulsed jet's successes come from its being a limiting case of a more general model: the "clumped" jet? [Yirak et al. (2009); Yirak et al., in prep] Treat the jet beam as a stream in which individual spherical "clumps" are embedded. The clumps have a range of densities ρ (ρ_C > ρ_J), velocities with respect to the beam Δv, sizes r (r_C < r_J), and radial locations within the beam. This allows a parameter space to be explored, which may recover a range of morphologies reminiscent of observations.
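A minimal sketch of how such a clumped-jet parameter space might be populated; the distributions, ranges, and variable names are illustrative assumptions, not the actual setup of Yirak et al.:

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_clumps(n, r_jet=1.0, rho_jet=1.0, v_jet=1.0):
    """Draw n spherical clumps embedded in a jet beam, enforcing the model's
    constraints: rho_c > rho_jet, r_c < r_jet, a velocity offset dv relative
    to the beam, and a radial position that keeps each clump inside the beam."""
    rho_c = rho_jet * rng.uniform(2.0, 10.0, n)        # overdense clumps
    r_c = r_jet * rng.uniform(0.1, 0.5, n)             # smaller than the beam
    dv = v_jet * rng.uniform(-0.2, 0.2, n)             # offset from beam speed
    radial = rng.uniform(0.0, 1.0, n) * (r_jet - r_c)  # center fully inside beam
    return {"rho": rho_c, "r": r_c, "dv": dv, "radial": radial}

clumps = draw_clumps(10)
print(clumps["rho"].round(2))
```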
41
Example 1: If Δv is large, the clumps may disrupt the beam
Bally 2002
42
Example 2: When the jet beam is removed, the clump-clump interactions are (even more) important
43
A natural corollary: the evolution of radiatively cooling shocked clumps
[Yirak et al. 2010, to appear in the Astrophysical Journal, 10/1/2010] In the adiabatic limit, shocked clumps have been extensively studied (Stone & Norman 1992; Klein et al. 1994; Shin et al. 2008; many, many others). The literature is less extensive when radiative cooling is included (Mellema et al. 2002; Fragile et al. 2004; Orlando et al. 2008). [Figure panels: adiabatic vs. cooling.]
44
Cooling can have a significant effect on the shocked clump evolution
The removal of thermal support potentially allows total collapse down to grid scales, instead of the collapse, re-expansion, and mixing observed in adiabatic simulations. These results were first reported in Mellema et al. 2002; Fragile et al. 2004 extended that work and saw similar behavior. [Figure: initial cloud boundary marked.]
45
Why: cooling reduces the size of interaction regions in multiple-shock systems
⇒ Cooling reduces shock speeds (by effectively lowering γ). For clumps, this implies smaller bow-shock stand-off distances, as well as longer cloud-crushing times (t_CC). The limiting case is isothermal (v_PS = v_S for M ≫ 1) (e.g., Dyson & Williams 1980). The effect is governed by the ratio of cloud radius to cooling length, χ_COOL ≡ r_C/L_COOL. For numerics, this implies a limiting resolution, below which important physics will not be fully resolved. Convergence studies of adiabatic clumps propose that on the order of 100 cells/r_C is sufficient to resolve the hydrodynamics. No convergence study of cooling clumps previously existed in the literature.
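An order-of-magnitude sketch of the resulting resolution constraint; the post-shock state, the constant cooling-function value Λ, and the simple t_cool ~ (3/2)kT/(nΛ) estimate are all assumptions for illustration:

```python
K_B = 1.380649e-16  # Boltzmann constant [erg/K]

def cooling_length(v_ps, n, T_ps, lam):
    """Estimate L_cool ~ v_ps * t_cool, taking the cooling time as the
    post-shock thermal energy divided by the radiative loss rate:
    t_cool ~ (3/2) k T / (n * Lambda(T))."""
    t_cool = 1.5 * K_B * T_ps / (n * lam)
    return v_ps * t_cool

# Illustrative post-shock state: ~100 km/s shock into n = 100 cm^-3 gas.
L_cool = cooling_length(v_ps=2.5e6, n=100.0, T_ps=2e5, lam=1e-22)
r_c = 3.086e16  # a 0.01 pc clump radius [cm]
print(f"L_cool = {L_cool:.2e} cm, chi_cool = {r_c / L_cool:.1f}")
print(f"to place ~10 cells per cooling length: dx <= {L_cool / 10:.2e} cm")
```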
46
A convergence study from L_COOL/Δx ≪ 1 to L_COOL/Δx ~ 16 at the bow shock illustrates the importance of resolving the cooling length. The study uses 2.5D simulations with cooling. Global evolution depends on the effective Reynolds number because of KH and RT growth at the clump's leading edge (the contact discontinuity). KH and RT growth times decrease with cooling, owing to the larger velocity shear and density stratification at the contact. Measuring convergence using the time evolution of global quantities is accordingly restricted to earlier times in the simulation. This implies a resolution requirement higher than that for adiabatic clumps, dependent on χ_COOL. The problem is ideal for investigation with explicit viscosity (e.g. Pittard et al. 2009) to eliminate dependence on the numerical Reynolds number.
47
The results extend to 3D (at 192 cells/r_C)
48
Keeping χCOOL in mind, it will be interesting to revisit previous cooling clump results
Simulation slated to run on 1,024 processors on the Bluegene/P system at the Center for Research Computing at UR. Same physical parameters as Fragile et al. 2004, but with much higher resolution (3,016 vs. 200 cells/r_C), so that L_COOL/Δx ~ 1. Cf. Klein & Woods 1998: the Nonlinear Thin Shell Instability requires higher resolution in the cooling and isothermal regimes. [Figure: initial cloud boundary marked; Yirak 2010, unpublished.]
49
In conclusion: The parallel, AMR, HD/MHD code AstroBEAR benefits from ongoing development aimed at improving both its algorithms and its capabilities. My own contributions to the code base have been consistent and important; I have undertaken development addressing both the numerics and the physics aspects of the code. Instantiating clumps within astrophysical jets themselves provides a natural mechanism for reproducing the complex morphology observed in HH objects. Finally, numerical study of the fundamental problem of cooling shocked clumps requires careful evaluation of numerical resolution; criteria based on the cooling length itself, or other careful arguments, should be adopted.
50
Ongoing and future work
Debris fields: 3D + B fields; multi-clump paper. Clumped jet: B fields again (cf. magnetized single clumps), chemical compositions, emission characteristics. Cooling and clumps: current cooling-regime study; clump-clump collisions; lab astro. AstroBEAR: HYPRE, parallelization, NewNewAMR, Bluegene. [Artist's depiction of AstroBEAR with graduate student.]
51
Laboratory experiments: an exciting testbed for astrophysics
In the lab, a supersonic, radiatively cooling jet surrounded by a magnetized cavity is produced using a wire-array z-pinch assembly. After launching, the jet becomes unstable, resulting in a collection of discrete denser regions, all of which continue to propagate in the jet direction. After the flow emerges from the cavity, the transported magnetic field is expected to diffuse on a timescale comparable to the jet dynamical timescale (Ciardi et al. 2007). Nonetheless, the jet remains collimated. Figure: Ciardi et al. 2007.
52
Thank you. Thanks to past and present group members:
PI: Adam Frank. Alexei Poludnenko (Naval Lab), Andrew Cunningham (LLNL), Brandon Shroyer, Christina Haig (UNC), Ed Schroeder (LLE), Jonathan Carroll, Martin Huarte-Espinosa, Matt Noyes, Peggy Varniere (U. Paris 7), Sean Tanny (Rice U.), Shule Li, Tim Dennis. Thanks to Spitzer, NSF, STScI, and the University of Rochester Laboratory for Laser Energetics (LLE), which has provided me with a Frank J. Horton Fellowship (2004–). Thank you to LANL and Melissa Douglas for giving me the opportunity to speak today.
53
Extra Slides
54
Reexamining the pulsed jet: what models exist?
[Yirak et al. (2009); Yirak et al., in prep]
Emission comes from ejecta:
* Steady jet, KH instabilities (HD) -- pros: direct from theory; cons: incorrect knot spacing.
* KH instabilities (MHD) -- pros: same, but with correct knot spacing; cons: requires a B field.
* Current-driven instabilities -- pros: same; cons: requires a (toroidal) B field.
* Variable jet: shock-steepening, "internal working surfaces" -- pros: leading explanation; can extrapolate 'velocity histories'; explains the origin of the shocks; cons: no obvious explanation for sub-radial or non-axial structure.
* "Interstellar bullet" -- pros: freedom to place bullets as you wish; cons: bullets break up.
Emission comes from ambient material:
* "Shocked cloudlet" -- pros: same as with bullets; cons: reverse-facing bow shock.
55
Considering algorithmic difficulty, is AMR really worth it?
As long as the adaptive mesh covers a small fraction of the entire domain, it is. However, once the adaptive mesh occupies much of the physical space, it is better to revert to a "fixed grid" calculation; in particular, if an adaptive mesh covers the entire domain, the "waste" (overhead) may be considerable. When the filling fraction is small, instead of a slowdown of 8x (2D) or 16x (3D), the slowdown is only a few percent. Hence, when the filling fraction is small, the extra cost for increased resolution is a few percent, compared to orders of magnitude.
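A back-of-the-envelope version of this cost argument (the fill fractions below are illustrative):

```python
def refined_cost_ratio(fill_fraction, dim):
    """Cost of one extra AMR level relative to the unrefined coarse grid:
    the refined region has 2**dim more cells, each advanced twice as often
    (halved time step), but only over `fill_fraction` of the domain."""
    return fill_fraction * 2 ** (dim + 1)

for f in (0.01, 0.1, 1.0):
    print(f"fill={f:4.0%}: extra cost = {refined_cost_ratio(f, 3):.2f}x the coarse grid")
# Small fill fractions add only a small fraction of the coarse-grid cost;
# fill = 100% recovers the full 16x slowdown of global 3D refinement.
```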
56
Temperature profiles: bow shock
57
Temperature profiles: transmitted shock
58
Temperature profiles: transmitted shock (Fragile 2004 parameters)
59
Temperature profiles: transmitted shock (Fragile 2004 parameters)
60
At the highest resolution, 1,536 cells per clump radius, complex evolution is observed
Yirak et al. 2010, accepted ApJ
61
More of NGC 3372 (because it's pretty)
~15 pc wide; a mosaic of HST images.
62
Using a pulsed jet, a series of sine modes is invoked to explain knot spacing
Image: Pat Hartigan's webpage. Raga et al. define a "dynamical time," which they in turn use to ascribe a two-mode launching profile to the object:
$t_x = -x/v_x,$
$v_j = v_0 + v_1 \sin\!\left(\frac{2\pi t}{\tau_1} + \varphi_1\right) + v_2 \sin\!\left(\frac{2\pi t}{\tau_2} + \varphi_2\right)$
(Raga et al., 2002, A&A, 395, 647).
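The launching profile is easy to evaluate numerically; in the sketch below the mode amplitudes, periods, and phases are made-up illustrative values, not the fit of Raga et al.:

```python
import numpy as np

def v_jet(t, v0, modes):
    """Two-mode pulsed-jet launching profile, following the form of
    Raga et al. (2002): v_j(t) = v0 + sum_i v_i*sin(2*pi*t/tau_i + phi_i)."""
    return v0 + sum(v * np.sin(2 * np.pi * t / tau + phi) for v, tau, phi in modes)

# Illustrative parameters: (v_i [km/s], tau_i [yr], phi_i [rad]).
t = np.linspace(0.0, 1000.0, 5)  # years
modes = [(30.0, 300.0, 0.0), (10.0, 60.0, 1.0)]
print(v_jet(t, v0=300.0, modes=modes).round(1))
```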
63
Jets through debris [Yirak et al., 2010]. Premise: the interstellar medium (ISM) is not smooth; smooth structures are susceptible to instabilities*. How is an astrophysical jet affected by this heterogeneity? Methodology: does the propagation depend more on the number of clumps or on their density? *Consider the Rayleigh-Taylor instability. [Figure: Yirak 2008, unpublished.]
64
[Images & movie: velocity vectors indicate jet deflection.]
65
The clumps can divert and disrupt the jet
Run A: jet-only. Run B: light clumps. Run C: heavier clumps. Run D: few, heaviest clumps. Run E: heaviest clumps. Run F: many, heaviest clumps.
66
The crossing time relates to the dynamic filling fraction
N_c: number of clumps; A_a: ambient area. $t_{\rm end} \propto f_d^{2.02}$
67
Domain averages do not reproduce results well
Comparison to an average-domain calculation*: the closest agreement is for the run with the highest filling fraction. Domain-averaged analysis is not a fruitful investigation tool. [Table: t_end from the simulations vs. the domain-averaged estimate, with relative error, for runs A–F.] *Cf. Poludnenko, 2002, § 3.3
68
Treating the host as a superposition fails as well
Can we treat the system as a superposition of single clumps? No. Based on a treatment of vorticity generation, the range of generated vorticity varies much less than would be expected. The reason relates to the critical clump separation of Poludnenko (2002): the distribution of clumps falls into a regime where they are expected to influence each other's dynamics, so a superposition is not appropriate. Thus, understanding the kinematics and morphology of such systems requires further investigation of systems of clumps along the lines pursued here, as the system does not admit the assumptions of averaging or superposition.
69
(Transition slide, for my eyes only)
We have seen the effects that heterogeneity has externally. We used a pulsed jet, which is a common model. [show images of other models] [image of clumpy observation] What if we bring heterogeneity into the jet itself?
70
References
Carroll, J., Frank, A., & Blackman, E. 2010, submitted to ApJ
Dyson, J. & Williams, D. 1980, The Physics of the Interstellar Medium, Taylor & Francis
Fragile, C.P., Murray, S.D., Anninos, P., & van Breugel, W. 2004, ApJ, 604, 74
Klein, R.I., McKee, C.F., & Colella, P. 1994, ApJ, 420, 213
Klein, R.I. & Woods, D.T. 1998, ApJ, 497, 777
Mellema, G., Kurk, J.D., & Rottgering, H.J.A. 2002, A&A, 395, L13
Orlando, S., Peres, G., Reale, F., Bocchino, F., Rosner, R., Plewa, T., & Siegel, A. 2008, A&A, 444, 505
Pittard, J.M., Falle, S.A.E.G., Hartquist, T.W., & Dyson, J.E. 2009, MNRAS, 394, 1351
Shin, M.S., Stone, J.M., & Snyder, G.F. 2008, ApJ, 680, 336
Stone, J.M. & Norman, M.L. 1992, ApJ, 390, L17
Yirak, K., Frank, A., Cunningham, A., & Mitran, S. 2009, ApJ, 695, 999