Commit 44885d00 authored by Holger Niemann's avatar Holger Niemann

Merge branch 'Holgers' into 'master'

Holgers

See merge request !45
parents 0ad0604a 420e76e8
XX.06.2020: Update to V3.5.0
- added download of maximum heat flux and wetted area
16.06.2020: Update to V3.4.1
- updated and improved documentation
- included caching function and a request for all ports for heat flux and divertor load
08.02.2020: Update to V3.4.0
- Bugfix: download_images_by_times produced duplicate values in the time and image vectors
- Bugfix: download divertor load printed that a request was made although no request was made (fixed)
- Bugfix: correction of version bug from last commit and change of default value for the profile extraction functions to None
- Readme added
Update to V3.3.3:
- added function for peaking factor and strike line width calculation in tools
- bugfix: fixed get_trigger_from_PID which caused an error in get_latest_version when used for non-IR data
Update to V3.3.2:
- extended feedback for download_heatflux
- changes of wetted area calculation
- bugfix: get_trigger_PID, get_finger and during import (see issues 6,7)
Update to V3.3.1:
- bugfix: FOV for AEF50 was not working correctly
Update to V3.3.0:
- IR_config_constants: added path to Test archive and use this variable in downloadversionIRdata when building an archive path
- downloadversionIRdata: consistent use of testmode parameter
- downloadversionIRdata: added testmode to all reading routines allowing reading from Test archive
- IR_image_tools: no longer sets working directory to script file location
- IR_config_constants: sets parameter_file_path as absolute path
- downloadversionIRdata: new function get_trigger_from_PID() wraps get_program_from_PID() and allows getting timestamps for lab data
- downloadversionIRdata: download_images_by_time_via_png() now uses nanosecond timestamps to call AKF_2.get_time_intervals()
- downloadversionIRdata: disable DeprecationWarning in import of archivedb (timezone issue on Windows)
- downloadversionIRdata: get_NUC_by_times() uses the reference cold frame in case no cold frame (NUC frame) was found for this time
- downloadversionIRdata: unified use of stoptime instead of endtime
- downloadversionIRdata: expanded testing sections (profile, coldframe, timestamps and scene model)
Update to V3.2.5:
- removed old, unneeded functions and dependencies; updated the download_background function
Update to V3.2.4:
- smaller bugfixes; setup.py added
Update to V3.2.3:
- bugfix: finding the right version for meta information
Update to V3.2.2:
......@@ -84,7 +105,8 @@ bugfixes in downloadversionIRdata:
- fixed: wrong time interval for TC divertor data
Versions:
V3.4.0: caching functionality included, request for all cameras included + Bugfixes
V3.5.0: download of wetted area, peak loads for different targets and average strike line width
V3.4.1: caching functionality included, request for all cameras included + Bugfixes
V3.3.0: code-cleaning, unification of variable names, new functions: get_trigger_from_PID,
V3.2.0: download of scene models possible, temperature profiles can be extracted, downsampled temperature images available
V3.1.0: add of divertor loads in the upload and the download
......
# -*- coding: utf-8 -*-
"""
Created on Wed May 9 14:56:32 2018
Version: 3.3.3
Version: 3.4.0
@author: Holger Niemann, Peter Drewelow, Yu Gao
mainly to clean up the downloadversionIRdata code
......@@ -24,14 +24,17 @@ import datetime
def get_OP_by_time(time_ns=None, shot_no=None, program_str=None):
'''Derives the operation phase (OP) of W7-X based on either
a nanosecond time stamp, an MDSplus style shot no. or a program ID.
INPUT
-----
time_ns: integer of nanosecond time stamp,
e.g. 1511972727249834301 (OPTIONAL)
shot_no: integer of MDSplus style shot number,
e.g. 171207022 (OPTIONAL)
program_str: string of CoDaQ ArchiveDB style program number or date,
e.g. '20171207.022' or '20171207' (OPTIONAL)
RESULT
------
OP: string identifying the operation phase
'''
......@@ -64,6 +67,16 @@ def get_OP_by_time(time_ns=None, shot_no=None, program_str=None):
return OP
def bestimmtheitsmass_general(data, fit):
"""
INPUT
------
data: list or numpy array of measured values
fit: list or numpy array of fitted values, same length as data
RESULT
------
R: float, the coefficient of determination (Bestimmtheitsmass)
"""
R = 0
if len(fit) == len(data):
mittel = np.sum(data)/len(data)
......@@ -74,6 +87,16 @@ def bestimmtheitsmass_general(data, fit):
return R
def bestimmheitsmass_linear(data, fit, debugmode=False):
"""
INPUT
------
data: list or numpy array of measured values
fit: list or numpy array of fitted values, same length as data
debugmode: boolean, activates debug output (OPTIONAL, default False)
RESULT
------
R2: float, the coefficient of determination of the linear fit
"""
R2 = 0
if len(fit) == len(data):
mittel_D = np.mean(data)#np.sum(data)/len(data)
......@@ -87,12 +110,32 @@ def bestimmheitsmass_linear(data, fit, debugmode=False):
return R2
def quad_abweich_mittel(data, mittel):
"""
INPUT
------
data: iterable of values
mittel: the mean value to compare against
RESULT
------
R: the sum of squared deviations of data from mittel
"""
R = 0
for i in data:
R = R+(i-mittel)**2
return R
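Since the function above is shown in full, here is a minimal standalone usage sketch (the function is reproduced so the snippet runs on its own):

```python
import numpy as np

def quad_abweich_mittel(data, mittel):
    # sum of squared deviations of the data points from the value `mittel`
    R = 0
    for i in data:
        R = R + (i - mittel) ** 2
    return R

data = [1.0, 2.0, 3.0]
# deviations from the mean (2.0): 1 + 0 + 1 = 2
print(quad_abweich_mittel(data, np.mean(data)))  # → 2.0
```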
def quad_abweich(data, fit):
"""
INPUT
------
data: list or numpy array of measured values
fit: list or numpy array of fitted values, same length as data
RESULT
------
R: the sum of squared deviations between data and fit
"""
R = 0
if len(fit) == len(data):
for i in range(len(data)):
......@@ -102,6 +145,16 @@ def quad_abweich(data, fit):
return R
def find_nearest(array, value):
"""
INPUT
------
array: list or numpy array of values
value: the value to search for
RESULT
------
the entry of array nearest to value
"""
#a=array
a = [x - value for x in array]
mini = np.min(np.abs(a))
......@@ -112,6 +165,15 @@ def find_nearest(array, value):
def check_coldframe(coldframe, references=None, threshold=0.5, plot_it=False):
'''
Checks the validity of a cold frame (NUC frame).
INPUT
------
coldframe: 2D numpy array, cold frame to check
references: reference values for the check (OPTIONAL)
threshold: quality threshold for acceptance (OPTIONAL, default 0.5)
plot_it: boolean, plot the check results (OPTIONAL, default False)
RESULT
------
True or False, and the quality factor
'''
shapi = np.shape(coldframe)
##function (np.arange(0,768)-384)**(2)/900-50
......@@ -147,6 +209,14 @@ def check_coldframe(coldframe, references=None, threshold=0.5, plot_it=False):
def check_coldframe_by_refframe(coldframe, reference_frame, threshold=0.8, plot_it=False):
'''
INPUT
------
coldframe: 2D numpy array, cold frame to check
reference_frame: 2D numpy array, reference cold frame
threshold: quality threshold for acceptance (OPTIONAL, default 0.8)
plot_it: boolean, plot the check results (OPTIONAL, default False)
RESULT
------
the result of the cold frame check against the reference
'''
references = []
shapi = np.shape(reference_frame)
......@@ -158,6 +228,15 @@ def check_coldframe_by_refframe(coldframe, reference_frame, threshold=0.8, plot_
def check_backgroundframe(backgroundframe, threshold=50):
'''
Checks the validity of a background frame.
INPUT
------
backgroundframe: 2D numpy array, background frame to check
threshold: threshold for the check (OPTIONAL, default 50)
RESULT
------
True or False
'''
shapi = np.shape(backgroundframe)
valid = True
......@@ -181,6 +260,15 @@ def read_bad_pixels_from_file(port, shot_no=None, program=None, time_ns=None):
OUT
bad_pixle_list - list of tuples (row,column) of pixel coordinates
as integer
'''
if shot_no is not None:
OP = get_OP_by_time(shot_no=shot_no)
......@@ -204,6 +292,15 @@ def find_outlier_pixels(frame, tolerance=3, plot_it=False):#worry_about_edges=Tr
'''
This function finds the bad pixels in a 2D dataset.
Tolerance is the number of standard deviations used for cutoff.
INPUT
------
frame: 2D numpy array, the image to check
tolerance: number of standard deviations used as cutoff (OPTIONAL, default 3)
plot_it: boolean, plot the detected bad pixels (OPTIONAL, default False)
RESULT
------
the positions of the detected bad pixels
'''
frame = np.array(frame)#, dtype=int)
from scipy.ndimage import median_filter
......@@ -242,6 +339,14 @@ def find_outlier_pixels(frame, tolerance=3, plot_it=False):#worry_about_edges=Tr
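The body of find_outlier_pixels is largely collapsed in this diff; the sketch below is an assumed illustration of median-filter based outlier detection with a standard-deviation cutoff, not the actual implementation (the helper name and logic are hypothetical):

```python
import numpy as np
from scipy.ndimage import median_filter

def find_outlier_pixels_sketch(frame, tolerance=3):
    # flag pixels that deviate from their local 3x3 median by more than
    # `tolerance` standard deviations of the deviation image
    frame = np.asarray(frame, dtype=float)
    diff = frame - median_filter(frame, size=3)
    cutoff = tolerance * np.std(diff)
    return np.nonzero(np.abs(diff) > cutoff)

frame = np.ones((5, 5))
frame[2, 2] = 100.0  # one synthetic hot pixel
rows, cols = find_outlier_pixels_sketch(frame)
print([(int(r), int(c)) for r, c in zip(rows, cols)])  # → [(2, 2)]
```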
def correct_images(images, badpixels, verbose=0):
'''
INPUT
------
images: list of 2D numpy arrays or 3D numpy array of images to correct
badpixels: list of bad pixel coordinates or a bad pixel mask
verbose: integer of feedback level (OPTIONAL, default 0)
RESULT
------
the corrected images
'''
if type(badpixels) != int:
if type(images) == list:
......@@ -263,8 +368,10 @@ def restore_bad_pixels(frames, bad_pixel, by_list=True, check_neighbours=True, p
sure that adjacent pixels are not bad (time consuming). Default is to use
a list of bad pixels and a for loop. For many bad pixels consider using
the optional alternative using a bad pixel mask.
INPUT
------
frames - either list of frames as 2D numpy array,
or 3D numpy array (frame number, n_rows, n_cols),
or 2D numpy array (n_rows, n_cols)
bad_pixel - either list of tuples of bad pixel coordinates,
......@@ -280,10 +387,13 @@ def restore_bad_pixels(frames, bad_pixel, by_list=True, check_neighbours=True, p
results or not
(OPTIONAL: if not provided, switched off)
verbose - integer of feedback level (amount of prints)
(OPTIONAL: if not provided, only ERROR output)
RESULT
------
frames - 3D numpy array (frame number, n_rows, n_cols) of
corrected frames
NOTE
------
"""
# make sure frames is correctly shaped
......@@ -430,6 +540,14 @@ def restore_bad_pixels(frames, bad_pixel, by_list=True, check_neighbours=True, p
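The restore logic of restore_bad_pixels is collapsed above; the following is a hypothetical minimal sketch of the list-based variant the docstring describes, replacing each listed bad pixel by the median of its neighbourhood:

```python
import numpy as np
from scipy.ndimage import median_filter

def restore_bad_pixels_sketch(frame, bad_pixel_list):
    # replace each listed bad pixel with the 3x3 median of its neighbourhood
    blurred = median_filter(frame, size=3)
    fixed = np.array(frame, dtype=float)
    for row, col in bad_pixel_list:
        fixed[row, col] = blurred[row, col]
    return fixed

frame = np.ones((3, 3))
frame[1, 1] = 500.0  # synthetic bad pixel
print(restore_bad_pixels_sketch(frame, [(1, 1)])[1, 1])  # → 1.0
```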
def generate_new_hot_image(cold,reference_cold,reference_hot):
'''
INPUT
------
cold: 2D numpy array, current cold frame
reference_cold: 2D numpy array, reference cold frame
reference_hot: 2D numpy array, reference hot frame
RESULT
------
the new hot image as 2D numpy array
'''
if cold is None or reference_cold is None or reference_hot is None:
raise Exception("generate_new_hot_image: Cannot Calculate new Hot image, if images are missing!")
......@@ -438,6 +556,14 @@ def generate_new_hot_image(cold,reference_cold,reference_hot):
def calculate_gain_offset_image_pix(cold_image,hot_image=None,reference_cold=None,reference_hot=None,verbose=0):
'''
INPUT
------
cold_image: 2D numpy array, cold frame
hot_image: 2D numpy array, hot frame (OPTIONAL, generated from the references if not given)
reference_cold: 2D numpy array, reference cold frame (OPTIONAL)
reference_hot: 2D numpy array, reference hot frame (OPTIONAL)
verbose: integer of feedback level (OPTIONAL, default 0)
RESULT
------
Gain_rel, Offset_rel: pixel-wise gain and offset images
'''
if hot_image is None:
hot_image=generate_new_hot_image(cold_image,reference_cold,reference_hot)
......@@ -452,6 +578,16 @@ def calculate_gain_offset_image_pix(cold_image,hot_image=None,reference_cold=Non
return Gain_rel,Offset_rel
def calculate_gain_offset_image(cold_image,hot_image=None,reference_cold=None,reference_hot=None,verbose=0):
"""
INPUT
------
cold_image: 2D numpy array, cold frame
hot_image: 2D numpy array, hot frame (OPTIONAL, generated from the references if not given)
reference_cold: 2D numpy array, reference cold frame (OPTIONAL)
reference_hot: 2D numpy array, reference hot frame (OPTIONAL)
verbose: integer of feedback level (OPTIONAL, default 0)
RESULT
------
the gain and offset images
"""
if hot_image is None:
hot_image=generate_new_hot_image(cold_image,reference_cold,reference_hot)
if verbose>0:
......@@ -502,17 +638,53 @@ def calculate_gain_offset_image(cold_image,hot_image=None,reference_cold=None,re
#==============================================================================
def reconstruct_coldframe (exposuretime, sT, a, bnew, coldref):
"""
INPUT
------
exposuretime: integer
the exposure time
sT: temperature term of the linear reconstruction model
a: fit coefficient scaling sT
bnew: fit coefficient scaling the exposure time
coldref: numpy array
the reference cold frame as the base for the reconstruction
RESULT
------
cirebuild: numpy array, the reconstructed cold frame
"""
cirebuild = a * sT + bnew * exposuretime + coldref
return cirebuild
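reconstruct_coldframe is a plain linear model in the exposure time; a standalone usage sketch with synthetic coefficient values (all numbers are made up for illustration):

```python
import numpy as np

def reconstruct_coldframe(exposuretime, sT, a, bnew, coldref):
    # linear model: temperature term + exposure-time term + reference frame
    return a * sT + bnew * exposuretime + coldref

coldref = np.zeros((2, 2))      # synthetic reference cold frame
a = np.ones((2, 2))             # synthetic fit coefficients
bnew = 0.1 * np.ones((2, 2))
frame = reconstruct_coldframe(50, 1.0, a, bnew, coldref)
print(frame[0, 0])  # 1.0*1.0 + 0.1*50 + 0.0 = 6.0
```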
#%% other functions
def check_dublicates(array):
"""
INPUT
------
array: list or 1D array to check
RESULT
------
list of all items that occur more than once in array
"""
a = array
import collections
return [item for item, count in collections.Counter(a).items() if count > 1]
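check_dublicates is shown in full above; a standalone usage sketch:

```python
import collections

def check_dublicates(array):
    # return every item that occurs more than once
    return [item for item, count in collections.Counter(array).items() if count > 1]

print(check_dublicates([1, 2, 2, 3, 3, 3]))  # → [2, 3]
```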
def check_dublicates_2(array):
"""
INPUT
------
array: list or 1D array to check
RESULT
------
uniq: list of the entries of array with duplicates removed, original order preserved
"""
seen = set()
uniq = []
for x in array:
......@@ -523,11 +695,29 @@ def check_dublicates_2(array):
def get_work_list(pipepath,typ="q"):
"""
INPUT
------
pipepath: string
the path to the folder where the files are located
typ: string
the type of data which is requested in the working list\n
possibilities: q, Aw, qpeak, width, load\n
or anything else for the problematic programs
RESULT
------
cam_programs: list
a list containing two columns, cameras and programs
reasons: list, optional, only for problematic programs
a list showing the reasons why data are not processed
NOTE
------
"""
today=datetime.datetime.now()
cam_programs=[]
if typ in ('q','load'):
if typ in ('q_old','load_old'):
f=open(pipepath+str(today.year)+str(today.month)+"_"+typ+"_requests.txt")
elif typ in ('q','load','qpeak','Aw','width'):
f=open(pipepath+"Auto_"+typ+"_requests.txt")
else:
reasons=[]
f = open(pipepath+"problematic_programs.txt")
......@@ -1191,6 +1381,7 @@ def derive_wetted_area_per_module(heat_flux, mapping, mode='average', q_max=None
------
total_wetted_area: float or numpy array
wetted area in a shape that depends on the mode (see NOTES);
for the 'average' mode it is (upper, lower)
q_max: float or numpy array
peak heat flux used for normalization in a shape that depends on the mode (see NOTES)
NOTES
......@@ -1571,4 +1762,44 @@ def derive_wetted_area_per_module(heat_flux, mapping, mode='average', q_max=None
plt.legend()
plt.show()
return total_wetted_area, q_max
\ No newline at end of file
return total_wetted_area, q_max
def convert_Bad_Pixels(Badpixels,map2D):
"""
transform the badpixel information from the camera images into a bad pixel
information for the heat flux images
INPUT
-----
Badpixels: 2D array
image of the bad pixels, with 0 for good pixels and >0 for bad pixels
map2D: dictionary of the 2D scene model mapping,
see downloadversionIRdata.download_heatflux_scene_model_reference()
RESULT
-----
heatflux_badpixel: numpy array
good pixels have a zero, bad pixels have a 1
"""
heatflux_badpixel=np.zeros(np.shape(map2D['Pixel_Y']))
PX=map2D['Pixel_X']
PY=map2D['Pixel_Y']
tlocos=np.where(np.asarray(Badpixels)>0)
locos=[]
for i in range(len(tlocos[0])):
locos.append((tlocos[0][i],tlocos[1][i]))
for ele in locos:
Xloc=np.where(PX==ele[1])
Yloc=np.where(PY==ele[0])
if len(Xloc[0])>0 and len(Yloc[0])>0:#okay there are points in the heat flux image with the coordinates
PointsX=[]
# PointsY=[]
for k in range(len(Xloc[0])):
PointsX.append((Xloc[0][k],Xloc[1][k]))
for L in range(len(Yloc[0])):
YY=(Yloc[0][L],Yloc[1][L])
if YY in PointsX:
heatflux_badpixel[YY[0],YY[1]]=1
# PointsY.append((Yloc[0][L],Yloc[1][L]))
return heatflux_badpixel
\ No newline at end of file
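The mapping loop in convert_Bad_Pixels can be compressed into a boolean mask; the sketch below is a hypothetical, simplified reimplementation exercised on a tiny synthetic scene model (all arrays are made up for illustration):

```python
import numpy as np

def convert_bad_pixels_sketch(badpixels, pixel_x, pixel_y):
    # flag every heat-flux pixel whose camera coordinates (row, col)
    # correspond to a bad pixel in the camera image
    out = np.zeros(pixel_y.shape)
    for row, col in zip(*np.where(np.asarray(badpixels) > 0)):
        out[(pixel_x == col) & (pixel_y == row)] = 1
    return out

# synthetic 2x2 mapping: heat-flux pixel (i, j) maps to camera pixel (i, j)
pixel_x = np.array([[0, 1], [0, 1]])
pixel_y = np.array([[0, 0], [1, 1]])
badpixels = np.array([[0, 1], [0, 0]])  # camera pixel (0, 1) is bad
print(convert_bad_pixels_sketch(badpixels, pixel_x, pixel_y))
```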
......@@ -6,3 +6,6 @@ download:
- implement download of the stored temperature data (After the upload)
- implement download of the stored heat flux data --> done in V3.0.0
- implement download of FLIR data --> Done in V2.8.0, in testing phase
- implement caching
- implement download of wetted area, peak heat flux and average strike line width
- implement request of all ports
This diff is collapsed.
......@@ -16,8 +16,8 @@ if __name__ == '__main__':
#%% loading data
print(datetime.datetime.now(), "start")
status, time, images, valid = downIR.get_temp_from_raw_by_program(10,
"20180920.017",
status, time, images, valid = downIR.get_temp_from_raw_by_program(51,
"20171206.045",
time_window=[1, 1.1],
emi=0.8,
T_version=2,
......
# -*- coding: utf-8 -*-
"""
Created on Thu Nov 29 17:41:40 2018
V3.2.0
V3.4.1
@author: holn
"""
import numpy as np
......@@ -18,7 +18,7 @@ if __name__=='__main__':
#%% loading data
print(datetime.datetime.now(),"start")
status,times,images=IR.download_heatflux_by_program(port,program,time_window=[0,2],version=2,threads=1,verbose=5,testmode=False)
status,times,images=IR.download_heatflux_by_program(port,program,time_window=[0,2],version=2,verbose=5,testmode=False)
print('done')
#%% plotting data
......
......@@ -2,7 +2,7 @@ from setuptools import setup, find_packages
setup(
name = 'ir-data-access',
version = '3.3.2',
version = '3.4.0',
author = 'Holger Niemann, Peter Drewelow',
author_email = 'holger.niemann@ipp.mpg.de',
description = 'Access Frontend for IR camera data',
......@@ -16,7 +16,7 @@ setup(
'plot_heatflux_example'
],
data_files=[
('',['upload_config','CHANGELOG','ToDo.txt']),
('',['upload_config','CHANGELOG','ToDo.txt','README.md']),
('data',['data/AEF10_coldframes_background_fails_real.txt',
'data/AEF11_coldframes_background_fails_real.txt',
'data/AEF20_coldframes_background_fails_real.txt',
......@@ -30,7 +30,7 @@ setup(
'data/finger_info_TDU.csv'])
],
install_requires = [
'archivedb>=0.2.0'
'w7xarchive>=0.11.21'
]
);