......@@ -2,6 +2,7 @@
*.pyc
# emacs backup files
*~
\#*\#
# python egg info
*.egg-info/*
.cache/*
......
# cosmic_dustbox
A toolbox for cosmic dust.
In 1930, after confirming that distant stars in the Milky Way are dimmed more
strongly than one would expect from geometry, Trumpler conjectured the
existence of "dust particles of various sizes" in the interstellar space of our
Galaxy. Countless observations have since then confirmed this and the
"interstellar extinction" remains one of the observational pillars of the study
of cosmic dust. It was later discovered that there is also dust in
interplanetary space, where it is important for planet formation; that there
is dust in other galaxies, even very distant ones; and that there may be dust
in intergalactic space as well. It was confirmed that dust grains do indeed come in
different sizes and, moreover, in different shapes and different chemical
compositions. In short, while many things have been understood, it also became
apparent that cosmic dust is very complicated. The goal of this software
package is to make work with models of cosmic dust on the computer easier while
still providing control over all the involved intricacies. An ambitious goal
for sure, but we'll try our best.
To introduce the objects central to how we describe cosmic dust, assume we are
interested in the dust in a certain environment. Assume then that we cut some
volume out of said environment. The volume should, of course, be representative of
the environment. So if we are interested in the dust in the Solar System as a
whole, a cubic meter of space somewhere between Earth and Mars will be too
small and a box the size of the Milky Way will be too large. Suppose we
collected all of the dust grains contained in this volume. We would like to be
able to describe this collection with a reasonable degree of accuracy without
having to store information on every single grain. If we investigated the
properties of single grains, at first the diversity would probably be
overwhelming since hardly any two grains would be exactly alike. But after
looking at enough grains, we would start discovering similarities. Grains that
have similar structure, grains that look like larger or smaller versions of
other grains, grains that have similar chemical composition and so on. So if we
are fine with some level of abstraction and generalization, our grain
collection could be represented like this:
![](figs/grain-pop.png)
with color-coded chemical composition. How do we go about categorizing these
grains so that we can think in these categories without worrying about
individual grains? Two obvious properties to group by are the chemical
composition and the shape:
![](figs/grain-species.png)
The "blueprint grain", which we can rescale to obtain any of the grains in one
of the groups, is referred to as a grain species. So one of our groups is
simply a grain species combined with a size distribution and we call it a grain
population. If we think back to the abstract representation of our collection
of grains, it is not very different from one of the groups. It simply has more
than one grain species, each with its associated size distribution. So to keep
it simple, we will also call this a grain population.
So, to recap, what we need to describe a collection of grains is:
- grain species
- size distributions
- grain populations
This approach can be refined as needed by simply splitting grain species. In
the extreme, we would have one species per grain and each species would have a
delta-peak size distribution and we would be back to the overwhelming diversity
we started out from.
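To make these three concepts concrete, here is a minimal, self-contained sketch. It is illustrative only: the class names, densities and distribution constants below are made up and are not this package's actual API.

```python
import numpy as np

class GrainSpecies:
    """A 'blueprint grain': fixed shape and composition, rescalable in size."""
    def __init__(self, name, density):
        self.name = name
        self.density = density  # mass density of the grain material [g/cm^3]

    def mass(self, size):
        # spherical grains for simplicity: m = (4/3) * pi * a^3 * rho
        return 4/3 * np.pi * size**3 * self.density

class GrainPopulation:
    """One or more grain species, each with its own size distribution."""
    def __init__(self, species_sdists):
        # list of (species, size distribution) pairs, where a size
        # distribution maps grain size to number of grains per unit size
        self.species_sdists = species_sdists

    def number(self, sizes):
        # total number of grains per unit size, summed over species
        return sum(sdist(sizes) for _, sdist in self.species_sdists)

silicate = GrainSpecies('silicate', 3.5)   # made-up but typical density
graphite = GrainSpecies('graphite', 2.24)

# power-law size distributions with made-up normalization constants
pop = GrainPopulation([
    (silicate, lambda a: 1e-25 * a**-3.5),
    (graphite, lambda a: 1e-26 * a**-3.5),
])
print(pop.number(np.array([1e-7, 1e-6, 1e-5])))  # sizes in cm
```

Splitting a species, as described above, would just mean replacing one (species, size distribution) pair by several finer-grained ones.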
# Literature
"Physics of the Interstellar and Intergalactic Medium" by B. Draine provides
a good introduction (and more than that) to cosmic dust.
"""
Toolbox for cosmic dust species and mixtures and their properties.
"""
from . import sdist
from . import crefin
# Table 1 of 2001ApJ...548..296W which gives size distribution parameters for different lines of sight and assumptions.
# R_V 10^5 b_C b_C Case \alpha_g \beta_g a_{t,g} [m] a_{c,g} [m] C_g \alpha_s \beta_s a_{t,s} [m] a_{c,s} [m] C_s
3.1 0.0 0.0 A -2.25 -0.0648 0.00000000745 0.000000606 9.94e-11 -1.48 -9.34 0.000000172 0.0000001 1.02e-12
3.1 1.0 0.00001 A -2.17 -0.0382 0.00000000373 0.000000586 3.79e-10 -1.46 -10.3 0.000000174 0.0000001 1.09e-12
3.1 2.0 0.00002 A -2.04 -0.111 0.00000000828 0.000000543 5.57e-11 -1.43 -11.7 0.000000173 0.0000001 1.27e-12
3.1 3.0 0.00003 A -1.91 -0.125 0.00000000837 0.000000499 4.15e-11 -1.41 -11.5 0.000000171 0.0000001 1.33e-12
3.1 4.0 0.00004 A -1.84 -0.132 0.00000000898 0.000000489 2.90e-11 -2.1 -0.114 0.000000169 0.0000001 1.26e-13
3.1 5.0 0.00005 A -1.72 -0.322 0.0000000254 0.000000438 3.20e-12 -2.1 -0.0407 0.000000166 0.0000001 1.27e-13
3.1 6.0 0.00006 A -1.54 -0.165 0.0000000107 0.000000428 9.99e-12 -2.21 0.3 0.000000164 0.0000001 1.0e-13
4.0 0.0 0.0 A -2.26 -0.199 0.0000000241 0.000000861 5.47e-12 -2.03 0.668 0.000000189 0.0000001 5.2e-14
4.0 1.0 0.00001 A -2.16 -0.0862 0.00000000867 0.000000803 4.58e-11 -2.05 0.832 0.000000188 0.0000001 4.81e-14
4.0 2.0 0.00002 A -2.01 -0.0973 0.00000000811 0.000000696 3.96e-11 -2.06 0.995 0.000000185 0.0000001 4.7e-14
4.0 3.0 0.00003 A -1.83 -0.175 0.0000000117 0.000000604 1.42e-11 -2.08 1.29 0.000000184 0.0000001 4.26e-14
4.0 4.0 0.00004 A -1.64 -0.247 0.0000000152 0.000000536 5.83e-12 -2.09 1.58 0.000000183 0.0000001 3.94e-14
5.5 0.0 0.0 A -2.35 -0.668 0.000000148 0.00000196 4.82e-14 -1.57 1.1 0.000000198 0.0000001 4.24e-14
5.5 1.0 0.00001 A -2.12 -0.67 0.0000000686 0.00000135 3.65e-13 -1.57 1.25 0.000000197 0.0000001 4.00e-14
5.5 2.0 0.00002 A -1.94 -0.853 0.0000000786 0.000000921 2.57e-13 -1.55 1.33 0.000000195 0.0000001 4.05e-14
5.5 3.0 0.00003 A -1.61 -0.722 0.0000000418 0.00000072 7.58e-13 -1.59 2.12 0.000000193 0.0000001 3.2e-14
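The table above can be read with `numpy.genfromtxt` and filtered by (R_V, 10^5 b_C, case), much as the `WD01` helper in this package does. A minimal sketch, with two of the rows above inlined so it is self-contained (column layout taken from the header comment):

```python
import io
import numpy as np

# two sample rows in the same layout as the table above
table = io.StringIO(
    "3.1 0.0 0.0 A -2.25 -0.0648 7.45e-9 6.06e-7 9.94e-11 "
    "-1.48 -9.34 1.72e-7 1e-7 1.02e-12\n"
    "4.0 0.0 0.0 A -2.26 -0.199 2.41e-8 8.61e-7 5.47e-12 "
    "-2.03 0.668 1.89e-7 1e-7 5.2e-14\n"
)
data = np.genfromtxt(table, dtype=None, encoding=None)

# select the parameter set for R_V = 4.0, 10^5 b_C = 0.0, case A
rows = [x for x in data if x[0] == 4.0 and x[1] == 0.0 and x[3] == 'A']
assert len(rows) == 1
params = rows[0]
alpha_g = params[4]  # graphite power-law index
print(alpha_g)       # prints -2.26
```

This relies on exact float comparison of the table values, which works here only because the selectors are short decimal literals round-tripped through the same parser.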
import astropy.units as _u
import numpy as _np
import warnings as _warnings
import os as _os
from scipy.interpolate import interp1d
from scipy import integrate
import sys

# make the Solver directory importable
Path = _os.path.join(
    _os.path.dirname(_os.path.dirname(_os.path.realpath(__file__))), 'Solver')
sys.path.insert(0, Path)
import sdist as sd
import GrainSpecies as gs


###############################################################################
class Dp(object):

    def __init__(self, en):
        self.en = en
        return

    def Sigma_abs_E(self, en):
        sizes = _np.logspace(self.sizeMin, self.sizeMax, num=10000)

        def func1(sizes):
            # graphite and silicate size distributions
            if issubclass(self.__class__, sd.SizeDist):
                dist_gr = self.__class__(sizes).sdist_gr
                dist_sil = self.__class__(sizes).sdist_sil
                return dist_gr, dist_sil

        def func2(sizes):
            # graphite and silicate absorption cross sections
            sigma_gr = gs.SpGraphite.Sigma_abs(en, sizes)[0]
            sigma_sil = gs.SpAsil.Sigma_abs(en, sizes)[0]
            return sigma_gr, sigma_sil

        f1_gr = interp1d(sizes, func1(sizes)[0], kind='linear')
        f2_sil = interp1d(sizes, func1(sizes)[1], kind='linear')
        f3_gr = interp1d(sizes, func2(sizes)[0], kind='linear')
        f4_sil = interp1d(sizes, func2(sizes)[1], kind='linear')

        # size-distribution-weighted averages of the cross sections
        integ1 = integrate.quad(lambda a: f1_gr(a)*f3_gr(a),
                                self.sizeMin, self.sizeMax)[0]
        integ2 = integrate.quad(f1_gr, self.sizeMin, self.sizeMax)[0]
        integ3 = integrate.quad(lambda a: f2_sil(a)*f4_sil(a),
                                self.sizeMin, self.sizeMax)[0]
        integ4 = integrate.quad(f2_sil, self.sizeMin, self.sizeMax)[0]

        sigma_abs_1 = integ1/integ2
        sigma_abs_2 = integ3/integ4
        sigma_abs_E = sigma_abs_1 + sigma_abs_2
        return sigma_abs_E
import astropy.units as _u
import numpy as _np
import warnings as _warnings
import os as _os
import miepython as miepy
from scipy import interpolate as _interp
import sys
Path = _os.path.join(
    _os.path.dirname(_os.path.dirname(_os.path.realpath(__file__))), 'Solver')
sys.path.insert(0, Path)
import Solver as sv
###############################################################################
import numpy as _np
import astropy.units as u
###############################################################################
class GrainSpecies(object):
"""
Grain species base class.
"""
def __init__(self):
return
def scatt(self, sizes, wave, **kwargs):
return self._scatt(
sizes.to(_u.micron),
wave.to(_u.micron, equivalencies=_u.spectral()),
**kwargs
)
class SphericalGS(GrainSpecies):
"""
Base class for spherical grain species.
"""
def __init__(self):
return
def volume(self, sizes):
        return 4*_np.pi/3 * sizes**3
def geomCS(self, sizes):
return _np.pi * sizes**2
def surfaceArea(self, sizes):
return 4*_np.pi * sizes**2
class HomSphericalGS(SphericalGS):
"""
Base class for homogeneous spherical grain species.
Represents grain species for which Mie theory and/or anomalous diffraction
theory can be used to compute optical properties.
"""
def __init__(self):
return
def mass(self, sizes):
return self.volume(sizes)*self.massDensity
class IsoHomSphericalGS(HomSphericalGS):
"""
Base class for isotropic and homogeneous spherical grain species.
For these grain species, Mie theory and anomalous diffraction theory can be
directly employed.
"""
    def __init__(self, crefin, mieSolver):
        self.crefin = crefin
        self.solver = mieSolver
        return
def _scatt(self, sizes, wave):
return
class SpAsil(GrainSpecies):

    @classmethod
    def getSilica_props(cls, en):
        # accept photon energies in eV or Joule and convert to wavelength
        if en.unit not in (_u.eV, _u.J):
            raise ValueError('The unit of energy should be eV or Joule.')
        lamda = en.to(_u.micron, equivalencies=_u.spectral())
        assert lamda.unit == _u.micron, 'The wavelength is not in micron'
        path = cls.loadFiles['Silica']
        silica = loadOprops(path)
        silica_d = Find_Interpolate(lamda, silica)
        return silica_d

    @classmethod
    def Sigma_abs(cls, en, sizes):
        m = cls.getSilica_props(en)
        sigma_abs = []
        for size in sizes:
            # Mie efficiencies; Qabs = Qext - Qsca
            qext, qsca, qback, g = miepy.mie(m, size)
            sigma_abs.append(_np.pi * size**2 * (qext - qsca))
        return _np.array(sigma_abs)

    @classmethod
    def Sigma_sca(cls, en, sizes):
        m = cls.getSilica_props(en)
        sigma_sca = []
        for size in sizes:
            qext, qsca, qback, g = miepy.mie(m, size)
            sigma_sca.append(_np.pi * size**2 * qsca)
        return _np.array(sigma_sca)
class SpGraphite(GrainSpecies):
    """
    Graphite grain species.
    """
    @classmethod
    def getCarbon_props(cls, en):
        # accept photon energies in eV or Joule and convert to wavelength
        if en.unit not in (_u.eV, _u.J):
            raise ValueError('The unit of energy should be eV or Joule.')
        lamda = en.to(_u.micron, equivalencies=_u.spectral())
        assert lamda.unit == _u.micron, 'The wavelength is not in micron'
        # optical properties parallel/perpendicular to the crystal axis,
        # tabulated for grain sizes 0.01 and 0.1 micron
        c_par_001 = loadOprops(cls.loadFiles['carbon_par_0.01'])
        c_par_01 = loadOprops(cls.loadFiles['carbon_par_0.1'])
        c_per_001 = loadOprops(cls.loadFiles['carbon_per_0.01'])
        c_per_01 = loadOprops(cls.loadFiles['carbon_per_0.1'])
        #
        c_par_001_d = Find_Interpolate(lamda, c_par_001)
        c_par_01_d = Find_Interpolate(lamda, c_par_01)
        c_per_001_d = Find_Interpolate(lamda, c_per_001)
        c_per_01_d = Find_Interpolate(lamda, c_per_01)
        #
        return c_par_001_d, c_par_01_d, c_per_001_d, c_per_01_d

    @classmethod
    def Sigma_abs(cls, en, sizes):
        c_par_001, c_par_01, c_per_001, c_per_01 = cls.getCarbon_props(en)
        sigma_par_001 = []
        sigma_par_01 = []
        sigma_per_001 = []
        sigma_per_01 = []
        for size in sizes:
            # Mie efficiencies; Qabs = Qext - Qsca
            for m, sigma in [
                    (c_par_001, sigma_par_001), (c_par_01, sigma_par_01),
                    (c_per_001, sigma_per_001), (c_per_01, sigma_per_01)]:
                qext, qsca, qback, g = miepy.mie(m, size)
                sigma.append(_np.pi * size**2 * (qext - qsca))
        # average over crystal axis orientations
        sigma_abs_001 = (2 * _np.array(sigma_par_001) +
                         _np.array(sigma_per_001)) / 3
        sigma_abs_01 = (2 * _np.array(sigma_par_01) +
                        _np.array(sigma_per_01)) / 3
        return sigma_abs_001, sigma_abs_01

    @classmethod
    def Sigma_sca(cls, en, sizes):
        c_par_001, c_par_01, c_per_001, c_per_01 = cls.getCarbon_props(en)
        sigma_par_001 = []
        sigma_par_01 = []
        sigma_per_001 = []
        sigma_per_01 = []
        for size in sizes:
            for m, sigma in [
                    (c_par_001, sigma_par_001), (c_par_01, sigma_par_01),
                    (c_per_001, sigma_per_001), (c_per_01, sigma_per_01)]:
                qext, qsca, qback, g = miepy.mie(m, size)
                sigma.append(_np.pi * size**2 * qsca)
        # average over crystal axis orientations
        sigma_sca_001 = (2 * _np.array(sigma_par_001) +
                         _np.array(sigma_per_001)) / 3
        sigma_sca_01 = (2 * _np.array(sigma_par_01) +
                        _np.array(sigma_per_01)) / 3
        return sigma_sca_001, sigma_sca_01
class cAxisSphericalGS(HomSphericalGS):
"""
Base class for homogeneous spherical grain species composed of anisotropic
material.
Represents grain species that are spherical but composed of a material that
has crystal axes so that its properties depend on orientation. It is
assumed that they can be treated in an "effective" way using Mie
theory/anomalous diffraction theory by averaging properties along different
axes appropriately. This is then supposed to represent the average
properties of an ensemble of randomly oriented grains.
The classic example of this are graphite grains where the crystal axis is
perpendicular to the so-called basal plane and properties are computed
using the "1/3 - 2/3 approximation", which has been shown to work well
[1]_.
References
----------
.. [1] Draine & Malhotra 1993ApJ...414..632D
"""
def __init__(self, crefins, axesWeights, mieSolver):
self.crefins = crefins
self.axesWeights = axesWeights
self.solver = mieSolver
return
def _scatt(self, sizes, wave):
return
from cosmic_dustbox import sdist
from cosmic_dustbox import Gspcs as gs
import astropy.units as _u
import os as _os
import numpy as _np
import pandas

# path to the WD01 size distribution parameter table
Path = _os.path.join(
    _os.path.dirname(_os.path.dirname(_os.path.realpath(__file__))),
    'lib', 'paramsWD01a.csv')


class WD01(object):

    # read the parameter table once at class definition time
    with open(Path) as f:
        df = pandas.read_csv(f)
        R_V = df['R_V']
        bc = df['b_C']
        alpha_g = df['\\alpha_g']
        beta_g = df['\\beta_g']
        a_tg = df['a_{t,g} [m]']
        a_cg = df['a_{c,g} [m]']
        C_g = df['C_g']
        alpha_s = df['\\alpha_s']
        beta_s = df['\\beta_s']
        a_ts = df['a_{t,s} [m]']
        a_cs = df['a_{c,s} [m]']
        C_s = df['C_s']

    # graphite mass density
    rho_g = 2.24 * _u.g/_u.cm**3

    def __init__(self, Indx):
        cls = self.__class__
        # two log-normal populations, Eq. (2) of Weingartner & Draine (2001):
        # mass fractions 0.75*b_C and 0.25*b_C, peaks at 3.5 and 30 Angstrom
        bc = [0.75*cls.bc[Indx], 0.25*cls.bc[Indx]]
        a0 = [3.5*_u.Angstrom, 30*_u.Angstrom]
        sdists = {
            'gra': sdist.DoubleLogNormal(
                3.5*_u.Angstrom, _np.inf, cls.rho_g, 0.4, bc, a0)
            + sdist.WD01ExpCutoff(
                3.5*_u.Angstrom, _np.inf, cls.a_tg[Indx], cls.beta_g[Indx],
                cls.a_cg[Indx], cls.alpha_g[Indx], cls.C_g[Indx]),
            'sil': sdist.WD01ExpCutoff(
                3.5*_u.Angstrom, _np.inf, cls.a_ts[Indx], cls.beta_s[Indx],
                cls.a_cs[Indx], cls.alpha_s[Indx], cls.C_s[Indx])
        }
        self.sdist_gr = sdists['gra']
        self.sdist_sil = sdists['sil']
        return
#
\ No newline at end of file
###############################################################################
import astropy.units as _u
import numpy as _np
import os as _os
from scipy.special import erf as _erf
###############################################################################
......@@ -84,7 +86,7 @@ class SizeDist(object):
Overloading of ``+`` operator. Can add an instance of ``SizeDist`` (or
subclass thereof), any callable or a scalar to ``self.func``. No check
on units is performed. Returns an instance of ``SizeDist``, the
``func`` attribute of which is defined to return the corresponding sum.
If ``other`` is an instance of ``SizeDist`` (or subclass), take maximum
......@@ -122,15 +124,15 @@ class SizeDist(object):
def func(sizes):
return self.func(sizes) + other.func(sizes)
return SizeDist(sizeMin, sizeMax, func)
elif callable(other):
def func(sizes):
return self.func(sizes) + other(sizes)
return SizeDist(self.sizeMin, self.sizeMax, func)
else:
def func(sizes):
return other + self.func(sizes)
return SizeDist(self.sizeMin, self.sizeMax, func)
# make addition commutative
__radd__ = __add__
......@@ -165,13 +167,19 @@ class PowerLaw(SizeDist):
def __init__(self, sizeMin, sizeMax, power, C):
def f(a):
return C*a**power
super().__init__(sizeMin, sizeMax, f)
return
class DoubleLogNormal(SizeDist):
"""
Sum of two log normal distributions as in Eq. (2) of [1]_.
References
----------
.. [1] Weingartner & Draine 2001ApJ...548..296W
"""
def __init__(self, sizeMin, sizeMax, rho, sgma, bc, a0):
# mass of carbon atom
m_C = 12.0107*_u.u
......@@ -180,8 +188,13 @@ class Log_Normal(SizeDist):
nominator = (3 * _np.exp(-4.5 * sgma**2) * bc[i] * m_C)
denominator = (
(2*_np.pi**2)**1.5 * rho * a0[i]**3 * sgma *
(1 +
_erf((3 * sgma/_np.sqrt(2)) +
(_np.log((a0[i]/3.5/_u.angstrom).decompose().value) /
(sgma * _np.sqrt(2)))
)
)
)
# FIXME: what do we do if the denominator is zero
if denominator != 0:
B = (nominator/denominator).decompose().value
......@@ -190,38 +203,103 @@ class Log_Normal(SizeDist):
_B = [B_val(i) for i in range(2)]
def f(a):
return sum([_B[i]/a * _np.exp(-0.5*(
_np.log((a/a0[i]).decompose().value)/sgma)**2)
for i in range(2)])
super().__init__(sizeMin, sizeMax, f)
return
class WD01ExpCutoff(SizeDist):
"""
Power law distribution with exponential cutoff as in Eqs. (4) and (5) of
[1]_.
References
----------
.. [1] Weingartner & Draine 2001ApJ...548..296W
"""
def __init__(self, sizeMin, sizeMax, a_t, beta, a_c, alpha, C):
if beta >= 0:
def F(a):
return 1 + (beta * a) / a_t
else:
def F(a):
return 1 / (1 - (beta * a) / a_t)
def exp_func(a):
r = _np.ones_like(a.value)
ind = _np.where(a > a_t)
r[ind] = _np.exp(-(((a[ind] - a_t)/a_c).decompose().value)**3)
return r
def f(a):
return C*(a / a_t)**alpha/a \
* F(a) * exp_func(a)
super().__init__(sizeMin, sizeMax, f)
return
def WD01(RV, bc, case):
# pandas dataframe would be far better suited for this but don't know if
# pulling in another dependency makes sense just for this
data = _np.genfromtxt(
_os.path.join(
_os.path.dirname(__file__),
'data/sdist/WD01-2001ApJ...548..296W-Table1.txt'
),
dtype=None,
encoding=None
)
# doing == float comparison here, might come back to bite us at some point
params = data[
[
j for j, x in enumerate(data)
if (x[0] == RV and x[1] == bc and x[3] == case)
]
]
if len(params) > 1:
# this should never happen
raise ValueError("Could not uniquely identify parameter set.")
elif len(params) == 0:
raise ValueError("Could not identify parameter set.")
params = params[0]
sizeMin = 3.5*_u.angstrom
sizeMax = 10*_u.micron
rho = 2.24*_u.g/_u.cm**3
s_car = DoubleLogNormal(
sizeMin,
sizeMax,
rho,
0.4,
[0.75*params[2], 0.25*params[2]],
[3.5*_u.angstrom, 30*_u.angstrom]
)
l_car = WD01ExpCutoff(
sizeMin,
sizeMax,
params[4],
params[5],
params[6]*_u.m,
params[7]*_u.m,
params[8]
)
sil = WD01ExpCutoff(
sizeMin,
sizeMax,
params[9],
params[10],
params[11]*_u.m,
params[12]*_u.m,
params[13]
)
return s_car + l_car, sil
###############################################################################
if __name__ == "__main__":
......
from unittest import TestCase
from .. import crefin
import numpy as np
import astropy.units as u
import os
class TestFromData(TestCase):
@classmethod
def setUpClass(cls):
cls.a1 = None
cls.lam1 = np.linspace(1, 10, num=10)
cls.n1 = np.linspace(0, 10, num=10) + 1j*np.linspace(0, 10, num=10)
cls.a2 = np.array([1e-9, 1e-8])
cls.lam2 = np.linspace(1, 10, num=20)
cls.n2 = (
np.linspace(0, 8, num=40)
+ 1j*np.linspace(0, 9, num=40)).reshape(2, 20)
return
@classmethod
def tearDownClass(cls):
return
def test_no_a(self):
c = self.__class__
a = crefin.Crefin.fromData(c.a1, c.lam1, c.n1)
r = a(
np.array([1, 2])*u.micron,
np.array([1, 5, 9])*u.micron
)
self.assertTrue(
r.shape[0] == 2)
self.assertTrue(
r.shape[1] == 3)
self.assertAlmostEqual(
r[0, 0], complex(0, 0))
self.assertAlmostEqual(
r[1, 0], complex(0, 0))
# check bounds error
with self.assertRaises(ValueError) as context:
a(
np.array([1])*u.micron,
np.array([11])*u.micron
)
return
def test_normal(self):
c = self.__class__
a = crefin.Crefin.fromData(c.a2, c.lam2, c.n2)
return
def test_single_element(self):
c = self.__class__
a = crefin.Crefin.fromData(c.a2, c.lam2, c.n2)
n = a(
np.array([1e-9])*u.micron,
np.array([8])*u.micron
)
        self.assertEqual(n.shape, (1, 1))
return
def test_single_element_no_bounds(self):
c = self.__class__
a = crefin.Crefin.fromData(c.a2, c.lam2, c.n2)
return
def test_normal_no_bounds(self):
c = self.__class__
a = crefin.Crefin.fromData(c.a2, c.lam2, c.n2,
bounds_error=False)
return
class TestSGPAHCrefin(TestCase):
@classmethod
def setUpClass(cls):
cls.basepath = os.path.join(
os.path.dirname(os.path.abspath(__file__)), '../data/crefin')
return
@classmethod
def tearDownClass(cls):
return
def test_parseCrefinFile_asil(self):
path = os.path.join(self.__class__.basepath, 'callindex.out_silD03')
size, data = crefin.SGPAHCrefin.parseCrefinFile(path)
self.assertAlmostEqual(0.1, size)
self.assertEqual(data.shape[0], 837)
self.assertEqual(data.shape[1], 5)
return
def test_parseCrefinFile_cpar(self):
path = os.path.join(self.__class__.basepath,
'callindex.out_CpaD03_0.01')
size, data = crefin.SGPAHCrefin.parseCrefinFile(path)
self.assertAlmostEqual(0.01, size)
self.assertEqual(data.shape[0], 386)
self.assertEqual(data.shape[1], 5)
return
def test_parseCrefinFile_cperp(self):
path = os.path.join(self.__class__.basepath,
'callindex.out_CpeD03_0.01')
size, data = crefin.SGPAHCrefin.parseCrefinFile(path)
self.assertAlmostEqual(0.01, size)
self.assertEqual(data.shape[0], 383)
self.assertEqual(data.shape[1], 5)
return
def test_fromFiles_asil(self):
path = os.path.join(self.__class__.basepath, 'callindex.out_silD03')
c = crefin.SGPAHCrefin.fromFiles([path])
a = np.array([0.1])*u.micron
wave = np.array([1.239840e5, 5.481167e2, 6.199200e-5])*u.micron
nre_expect = [2.4350+1, 2.4040+1, -1.9450e-6+1]
nim_expect = [1.1190e-3, 8.2550e-2, 1.7830e-8]
n = c(a, wave)
self.assertEqual(n.shape[0], len(a))
self.assertEqual(n.shape[1], len(wave))
for j, r in enumerate(n[0, :]):
self.assertAlmostEqual(r.real, nre_expect[j])
self.assertAlmostEqual(r.imag, nim_expect[j])
return
def test_fromFiles_asil_multiple_a(self):
path = os.path.join(self.__class__.basepath, 'callindex.out_silD03')
c = crefin.SGPAHCrefin.fromFiles([path])
a = np.array([1.0, 0.1, 0.01])*u.micron
wave = np.array([1.239840e5, 5.481167e2, 6.199200e-5])*u.micron
nre_expect = [2.4350+1, 2.4040+1, -1.9450e-6+1]
nim_expect = [1.1190e-3, 8.2550e-2, 1.7830e-8]
n = c(a, wave)
self.assertEqual(n.shape[0], len(a))
self.assertEqual(n.shape[1], len(wave))
for i in range(len(a)):
for j, r in enumerate(n[i, :]):
self.assertAlmostEqual(r.real, nre_expect[j])
self.assertAlmostEqual(r.imag, nim_expect[j])
return
def test_fromFiles_cperp(self):
paths = [
os.path.join(self.__class__.basepath, 'callindex.out_CpeD03_0.01'),
os.path.join(self.__class__.basepath, 'callindex.out_CpeD03_0.10')
]
c = crefin.SGPAHCrefin.fromFiles(paths)
a = np.array([1.0, 0.1, 0.01, 0.001])*u.micron
wave = np.array([1.239840E+05, 6.322489E+00, 6.199200E-05])*u.micron
nre_expect = np.array([
[1.0596E+03, 4.5430E+00, 1.5405E-10],
[1.0596E+03, 4.5430E+00, 1.5405E-10],
[3.4137E+02, 4.5568E+00, 1.5405E-10],
[3.4137E+02, 4.5568E+00, 1.5405E-10],
])+1
nim_expect = [
[1.0636E+03, 4.2997E+00, 1.4471E-17],
[1.0636E+03, 4.2997E+00, 1.4471E-17],
[3.4149E+02, 4.3122E+00, 1.3963E-16],
[3.4149E+02, 4.3122E+00, 1.3963E-16],
]
n = c(a, wave)
self.assertEqual(n.shape[0], len(a))
self.assertEqual(n.shape[1], len(wave))
for i in [0, 1]:
for j, r in enumerate(n[i, :]):
self.assertAlmostEqual(r.real, nre_expect[i][j],
msg='Failed at i='+str(i)+' j='+str(j))
self.assertAlmostEqual(r.imag, nim_expect[i][j])
return
def test_fromFiles_cpar(self):
paths = [
os.path.join(self.__class__.basepath, 'callindex.out_CpaD03_0.01'),
os.path.join(self.__class__.basepath, 'callindex.out_CpaD03_0.10')
]
c = crefin.SGPAHCrefin.fromFiles(paths)
a = np.array([1.0, 0.1, 0.01, 0.001])*u.micron
wave = np.array([1.239840E+05, 6.199200E+00, 6.199200E-05])*u.micron
nre_expect = np.array([
[1.5051E+02, 1.1266E+00, 2.3165E-10],
[1.5051E+02, 1.1266E+00, 2.3165E-10],
[1.4329E+02, 1.0261E+00, 2.3165E-10],
[1.4329E+02, 1.0261E+00, 2.3165E-10],
])+1
nim_expect = [
[1.5156E+02, 2.3038E-02, 1.4161E-17],
[1.5156E+02, 2.3038E-02, 1.4161E-17],
[1.4434E+02, 3.0850E+00, 1.5614E-17],
[1.4434E+02, 3.0850E+00, 1.5614E-17],
]
n = c(a, wave)
self.assertEqual(n.shape[0], len(a))
self.assertEqual(n.shape[1], len(wave))
for i in [0, 1]:
for j, r in enumerate(n[i, :]):
self.assertAlmostEqual(r.real, nre_expect[i][j],
msg='Failed at i='+str(i)+' j='+str(j))
self.assertAlmostEqual(r.imag, nim_expect[i][j])
return
    def test_fromFiles_cpar_smoke(self):
paths = [
os.path.join(self.__class__.basepath, 'callindex.out_CpaD03_0.01'),
os.path.join(self.__class__.basepath, 'callindex.out_CpaD03_0.10')
]
a = crefin.SGPAHCrefin.fromFiles(paths)
return
def test_fromFiles_diff_wavelengths(self):
paths = [
os.path.join(self.__class__.basepath, 'callindex.out_CpeD03_0.01'),
os.path.join(self.__class__.basepath, 'callindex.out_CpaD03_0.10')
]
with self.assertRaises(ValueError) as context:
a = crefin.SGPAHCrefin.fromFiles(paths)
return
def test_fromFiles_same_size(self):
paths = [
os.path.join(self.__class__.basepath, 'callindex.out_CpeD03_0.01'),
os.path.join(self.__class__.basepath, 'callindex.out_CpeD03_0.01')
]
with self.assertRaises(ValueError) as context:
a = crefin.SGPAHCrefin.fromFiles(paths)
return
......@@ -63,7 +63,9 @@ class TestSdist(TestCase):
r = a(self.__class__._sizes)
r[np.where(r != 0)] = r[np.where(r != 0)] + scalar
self.assertTrue(
np.all(b(self.__class__._sizes) == r))
np.all(
np.isclose(b(self.__class__._sizes), r, rtol=1e-10, atol=0)
))
return
def test_add_scalar_quantity_float(self):
......@@ -73,5 +75,41 @@ class TestSdist(TestCase):
r = a(self.__class__._sizes)
r[np.where(r != 0)] = r[np.where(r != 0)] + scalar
self.assertTrue(
np.all(b(self.__class__._sizes) == r))
np.all(
np.isclose(b(self.__class__._sizes), r, rtol=1e-10, atol=0)
))
return
class TestWD01(TestCase):
def test_smoke_test(self):
sdist.WD01(3.1, 0.0, 'A')
return
def test_wrong_params(self):
with self.assertRaises(ValueError) as context:
sdist.WD01(3.1, 8.0, 'A')
with self.assertRaises(ValueError) as context:
sdist.WD01(3.1, 8.0, 'C')
return
def test_out_of_bounds(self):
for sd in sdist.WD01(3.1, 0.0, 'A'):
self.assertEqual(
sd(np.array([1])*u.angstrom)[0],
0.0/u.angstrom
)
self.assertEqual(
sd(np.array([11])*u.micron)[0],
0.0/u.angstrom
)
for sd in sdist.WD01(3.1, 6.0, 'A'):
self.assertEqual(
sd(np.array([1])*u.angstrom)[0],
0.0/u.angstrom
)
self.assertEqual(
sd(np.array([11])*u.micron)[0],
0.0/u.angstrom
)
\documentclass[crop, tikz]{standalone}
\usetikzlibrary{math}
\begin{document}
\begin{tikzpicture}
\draw (0, 0) rectangle (10, 10);
% red circles
% import numpy as np
% for x, y, r in zip(10*(np.random.random(5)), 10*(np.random.random(5)), 0.5*(np.random.random(5))):
% print(str(x)+'/'+str(y)+'/'+str(r)+',')
\foreach \xcord / \ycord / \rcord in {
7.679061860689958/3.682979874355471/0.4565838404947177,
9.79222926473049/5.283254108481699/0.014712651759808348,
9.228163854796406/0.465467574267856/0.12417262955376379,
8.466903599867722/2.6335052370495404/0.15078170372073274,
3.8702372331492265/5.903399027966185/0.23838775192054612,
4.986562338321869/1.7117275570073676/0.044592439438311426,
7.448863277700564/8.226461665494632/0.06244327990940718,
6.306622283446704/0.5491178028038102/0.3074134849629631,
6.831041908490814/8.333630001447492/0.05653784106297699
}{
\draw[red, thick] (\xcord, \ycord) circle (\rcord);
}
% blue circles
\foreach \x / \y / \r in {
1.2293473513913522/4.117458330953575/0.8282351326975819,
7.687835189158383/6.525741647740903/0.4959151367782896,
5.07067407779401/4.413937901291682/0.32343724210759717,
6.252736357650197/8.899430585033288/0.11194328613448434
}{
\draw[blue, thick] (\x, \y) circle (\r);
}
% green ellipses
\foreach \x / \y / \a / \b / \ang in {
5.140384055187862/5.996278145733781/0.44458636535075624/0.13337590960522686/42.8959206218665,
3.234418862988054/4.945278056645622/0.36019401697947695/0.10805820509384308/132.77692008629268,
0.6563379428642213/6.504964785235348/0.19991550807517933/0.059974652422553794/124.22338658231189,
9.197961905303377/2.205439358917957/0.27526797473586584/0.08258039242075975/100.8813649933555,
0.3062812177004437/1.4677046604730348/0.42017352508362876/0.12605205752508863/68.71741736618897
}{
\draw[green, thick, rotate around={\ang:(\x,\y)}] (\x, \y) ellipse (\a cm and \b cm);
}
% blue triangles
% import numpy as np
% for x, y, a, ang in zip(10*(np.random.random(5)), 10*(np.random.random(5)), 0.5*(np.random.random(5)), 180*(np.random.random(5))):
% print(str(x)+'/'+str(y)+'/'+str(a)+'/'+str(0.3*a)+'/'+str(ang)+',')
\foreach \x / \y / \a / \b / \ang in {
3.1825266671917642/8.739378860463935/0.3076868629010771/0.09230605887032313/55.44224580874666,
5.9953999833826535/1.1586268023457924/0.31485139062877743/0.09445541718863322/71.98786362988174,
9.810321336977507/9.65425711309965/0.09875441625998671/0.02962632487799601/76.7939584508893,
9.05699195598717/5.39917055694529/0.419809206167695/0.1259427618503085/22.929429244839067,
4.835379117540814/2.649414224653106/0.13385616813162254/0.04015685043948676/13.633175956536425,
6.858701607071196/0.6665273841406083/0.19102094661889218/0.05730628398566765/142.1722416667769,
0.25352036670998834/9.508868483290078/0.10883725448703785/0.03265117634611135/87.79022903558271
}{
\draw[blue, thick, rotate around={\ang:(\x,\y)}] (\x, \y) \foreach \s in {120, 240}{
-- ++ (\s:\a)
} -- cycle ++ (90:\a);
}
\end{tikzpicture}
\end{document}
\documentclass[crop, tikz]{standalone}
\usetikzlibrary{math}
\usetikzlibrary{calc}
\usetikzlibrary{positioning}
\begin{document}
\begin{tikzpicture}
\draw (0, 0) rectangle (5, 5);
\draw (5, 0) rectangle (10, 5);
\draw (0, 5) rectangle (5, 10);
\draw (5, 5) rectangle (10, 10);
% red circles
% import numpy as np
% for x, y, r in zip(10*(np.random.random(5)), 10*(np.random.random(5)), 0.5*(np.random.random(5))):
% print(str(x)+'/'+str(y)+'/'+str(r)+',')
\foreach \xcord / \ycord / \rcord in {
{1*5/4} / {3*5/4} / 0.4565838404947177,
{2*5/4} / {3*5/4} / 0.3074134849629631,
{3*5/4} / {3*5/4} / 0.23838775192054612,
{1*5/4} / {2*5/4} / 0.15078170372073274,
{2*5/4} / {2*5/4} / 0.12417262955376379,
{3*5/4} / {2*5/4} / 0.06244327990940718,
{1*5/4} / {1*5/4} / 0.044592439438311426,
{2*5/4} / {1*5/4} / 0.05653784106297699,
{3*5/4} / {1*5/4} / 0.014712651759808348
}{
\draw[red, thick] (\xcord, \ycord) circle (\rcord);
}
% blue circles
\foreach \x / \y / \r in {
{5+1*5/3} / {2*5/3} / 0.8282351326975819,
{5+2*5/3} / {2*5/3} / 0.4959151367782896,
{5+1*5/3} / {1*5/3} / 0.32343724210759717,
{5+2*5/3} / {1*5/3} / 0.11194328613448434
}{
\draw[blue, thick] (\x,\y) circle (\r);
}
% green ellipses
\foreach \x / \y / \a / \b in {
{1*5/4} / {5+2*5/3} / 0.44458636535075624 / 0.13337590960522686 ,
{2*5/4} / {5+2*5/3} / 0.42017352508362876 / 0.12605205752508863 ,
{3*5/4} / {5+2*5/3} / 0.36019401697947695 / 0.10805820509384308 ,
{1*5/3} / {5+1*5/3} / 0.27526797473586584 / 0.08258039242075975 ,
{2*5/3} / {5+1*5/3} / 0.19991550807517933 / 0.059974652422553794
}{
\draw[green, thick] (\x, \y) ellipse (\b cm and \a cm);
}
% blue triangles
% import numpy as np
% for x, y, a, ang in zip(10*(np.random.random(5)), 10*(np.random.random(5)), 0.5*(np.random.random(5)), 180*(np.random.random(5))):
% print(str(x)+'/'+str(y)+'/'+str(a)+'/'+str(0.3*a)+'/'+str(ang)+',')
% This computation of the x coords is ugly as all hell. Need to right
% shift the triangles by \a/2 since they're anchored at the bottom right
% corner. Couldn't get it to work in the foreach loop.
\foreach \x / \y / \a in {
{5+1*5/4+0.419809206167695/2} / {5+3*5/4} / 0.419809206167695 ,
{5+2*5/4+0.31485139062877743/2} / {5+3*5/4} / 0.31485139062877743,
{5+3*5/4+0.3076868629010771/2} / {5+3*5/4} / 0.3076868629010771 ,
{5+1*5/4+0.19102094661889218/2} / {5+2*5/4} / 0.19102094661889218,
{5+2*5/4+0.13385616813162254/2} / {5+2*5/4} / 0.13385616813162254,
{5+3*5/4+0.10883725448703785/2} / {5+2*5/4} / 0.10883725448703785,
{5+2*5/4+0.09875441625998671/2} / {5+1*5/4} / 0.09875441625998671
}{
\draw[blue, thick] (\x,\y) \foreach \s in {120, 240}{
-- ++ (\s:\a)
} -- cycle ++ (90:\a);
}
\end{tikzpicture}
\end{document}
......@@ -2,20 +2,22 @@ from setuptools import setup
with open('README.md') as fh:
long_desc = fh.read()
name = 'cosmic_dustbox'
setup(
name=name,
version='0.1.dev0',
author_email='findessp@yandex.ru',
description='Toolbox for cosmic dust.',
long_description=long_desc,
long_description_content_type='text/markdown',
url='https://github.com/findesgh/'+name,
license='GPL-3.0',
packages=[name],
install_requires=[
'numpy',
'astropy',
'scipy'
],
tests_require=[
'pytest'
......