Using the package to fit multi-component atmospheres using the IBIS8542Model class

Initialising the model and loading the data

Import the packages required for the analysis.

[ ]:
import mcalf.models
import numpy as np
from import readsav
from import fits

Load the wavelength positions that IBIS collected data at, and the prefilter correction to divide the raw spectra by. This example shows how both IDL SAV files and CSV files can be read.

[ ]:
original_wavelengths = readsav('wavelengths_original.sav')['wavelengths']
prefilter_response = np.loadtxt('prefilter.csv')

Initialise the IBIS 8542 Model with the default parameters and the specific wavelengths and prefilter. Alternatively, a config file can be specified.

[ ]:
m = mcalf.models.IBIS8542Model(original_wavelengths=original_wavelengths, prefilter_response=prefilter_response)
#m = mcalf.models.IBIS8542Model(config="config.yml")  # See `config.yml` for more details.

Load the ground truth dataset and use it to train the neural network. Alternatively, a pre-trained neural network can be loaded.

[ ]:
import pickle  # Load the trained model from file.
# pkl = open('trainedneuralnetwork.pkl', 'rb')
# m.neural_network = pickle.load(pkl)  # Overwrite the default untrained model

# Load ground truth data
labels = np.load('labels.npy')
spectra = np.load('labeled_data.npy')

# Train the neural network on the first half of the ground truth data
m.train(spectra[:100], labels[:100])

# Optionally, save the trained neural network so it can be loaded again later
# pkl = open('trainedneuralnetwork.pkl', 'wb')
# pickle.dump(m.neural_network, pkl)
# pkl.close()

# Test the neural network with the rest of the ground truth data
m.test(spectra[100:], labels[100:])
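The commented-out pickle save/load lines above follow the standard round-trip pattern. As a self-contained sketch using a stand-in object (a plain dictionary, not a real trained network):

```python
import pickle

model_stand_in = {'weights': [0.1, 0.2], 'n_classes': 5}  # Placeholder for m.neural_network

# Save to disk (`with` closes the file automatically, replacing the explicit pkl.close())
with open('trainedneuralnetwork.pkl', 'wb') as f:
    pickle.dump(model_stand_in, f)

# Load it back later
with open('trainedneuralnetwork.pkl', 'rb') as f:
    restored = pickle.load(f)
```

The `with` blocks guarantee the file handles are closed even if an exception occurs.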

Load the spectral data from file.

[ ]:
hdul ='IBIS_scan_00100.fits')
raw = hdul[0].data

Mask the region that you would like to process.

[ ]:
# Load the mask
from mcalf.utils.mask import genmask
mask = genmask(width=1000, height=1000, radius=460, right_shift=-5, up_shift=10)
# mask = np.load('umbra_mask.npy')  # Or use a premade one that can be any shape.

# Apply the mask
mask = np.repeat(mask[np.newaxis, :, :], len(raw), axis=0)  # Duplicate along the wavelength dimension of `raw`
raw[~mask] = np.nan  # The code will ignore NaN pixels
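The broadcasting step above can be checked on a small stand-in array (shapes chosen arbitrarily for illustration):

```python
import numpy as np

raw = np.ones((3, 4, 4))             # Stand-in for (wavelength, row, column) data
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                # Keep only a 2 x 2 central region

# Duplicate the 2-D mask along the wavelength dimension, then blank excluded pixels
mask = np.repeat(mask[np.newaxis, :, :], len(raw), axis=0)
raw[~mask] = np.nan

print(np.count_nonzero(~np.isnan(raw)))  # 3 wavelengths * 4 kept pixels = 12
```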

Import the masked data into the IBIS 8542 Model.

[ ]:
m.load_array(raw, ['wavelength', 'row', 'column'])

# Multiple times can be loaded at once, but you'll have to adapt the above masking code
# m.load_array(umbra_data, ['time', 'wavelength', 'row', 'column'])

Load the calculated background values (continuum intensities) for all the spectra.

[ ]:
bg = np.load('background_averaged.npy', mmap_mode='r')[100]
# Use mmap_mode so you can quickly pick the scan that you are working with without loading the whole file
m.load_background(bg, ['row', 'column'])

# Multiple times can be loaded at once
# m.load_background(bg, ['time', 'row', 'column'])

Now the model has been fully initialised. You may wish to adjust some more parameters; see the IBIS8542Model docstring for details.

Using the model to classify spectra

The loaded spectra can be now classified.

[ ]:
# Select the data that you would like to classify
classifications_map = m.classify_spectra(row=range(1000), column=range(1000))
# classifications_map = m.classify_spectra(row=[300, 301, 302], column=range(500, 505))

# The following will give you (300, 500), (300, 600), (400, 500), and (400, 600)
# classifications_map = m.classify_spectra(row=[300, 400], column=[500, 600])

# Process the result to make it easier to plot
classifications_map = np.asarray(classifications_map[0], dtype=float)  # Zero index for the first (and only) loaded time
classifications_map[classifications_map == -1] = np.nan  # Replace the invalid values with nan to aid visualisation

You can then run analysis on the classifications or plot them.

[ ]:
import matplotlib.pyplot as plt
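For example, the number of pixels assigned to each class can be counted with plain numpy (shown on a tiny stand-in array here; substitute the real `classifications_map`):

```python
import numpy as np

# Stand-in for the processed map from the previous cell
classifications_map = np.array([[0., 1., np.nan],
                                [1., 1., 2.]])

# Drop the NaN (masked/invalid) pixels, then count each class
valid = classifications_map[~np.isnan(classifications_map)]
values, counts = np.unique(valid, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))  # {0.0: 1, 1.0: 3, 2.0: 1}
```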

Using the model to fit the spectra and find velocities

You can simply fit a single spectrum and plot it as follows,

[ ]:
fit =, column=600)[0]  # The zero index selects the first (and only) FitResult generated

# You can then call the plot method on the FitResult, remembering to specify the model used so that it can also plot
# the observed spectrum and average central wavelength
fit.plot(m, separate=True)  # If a double Voigt model is fitted, this will show the two components separately


If you have multiple fits, they will always be returned as a 1D list of FitResult objects, but you can find each fit's actual index as follows,

[ ]:
fit1, fit2 =[500, 501], column=500)

# Index of fit1 [<time>, <row>, <column>], and other details
print("Index:", fit1.index, fit2.index)
print("Fitted parameters:", fit1.parameters, fit2.parameters)
print("Classification assigned:", fit1.classification, fit2.classification)
print("Profile used:", fit1.profile, fit2.profile)
print("Was the method able to produce a result?", fit1.success, fit2.success)

# You can find the velocity using the `velocity` method
# (the model `m` must be specified so that the stationary line core wavelength is known).
# 'quiescent' is the default vtype if not specified:
print("Quiescent velocity:", fit1.velocity(m, vtype='quiescent'), fit2.velocity(m, vtype='quiescent'))
# The following will work if a double Voigt fit was done (otherwise NaN):
print("Active velocity:", fit1.velocity(m, vtype='active'), fit2.velocity(m, vtype='active'))


Running big jobs

If you are fitting a very large number of spectra, the FitResults class is handy: you can append each FitResult to it and it will collate the data into arrays that can easily be saved.

[ ]:
results = mcalf.models.FitResults((1000, 1000), 8)  # Assuming a loaded array of 1000 by 1000 rows and columns,
# and max 8 fitted parameters per spectrum.

You can then run the following code to add the previous fits to it,

[ ]:
for fit_result in [fit, fit1, fit2]:  # The fits generated in the previous section
    results.append(fit_result)

And then you can extract the arrays,

[ ]:
# results.parameters
# results.classifications
# results.profile
# results.success

# The velocities can be generated in bulk too

# results.velocities(m, vtype='quiescent')
# results.velocities(m, vtype='active')

When running a big job you can take the initialised model, m, from the first section, and run the following,

[ ]:
# Assuming 1000 x 1000 rows and columns
results = mcalf.models.FitResults((1000, 1000), 8)

for i in range(0, 1000, 100):  # Do 10 batches of 100 rows

    # Fit the batch, using the specified number of processing pools
    results_batch =, i+100), column=range(1000), n_pools=32)

    for result in results_batch:  # Take each FitResult object in the batch
        results.append(result)  # And append it to the FitResults object

The arrays can be extracted as shown above, and then saved in your desired format.
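As a sketch of one possible format (the filename and array shape here are arbitrary choices, not part of the package), numpy's `.npy` files round-trip the extracted arrays exactly:

```python
import numpy as np

# Stand-in for an extracted array, e.g. results.velocities(m, vtype='quiescent')
velocities = np.random.rand(10, 10)

np.save('quiescent_velocities.npy', velocities)  # Save to disk
loaded = np.load('quiescent_velocities.npy')     # Load back later, values unchanged
```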

Any spectra that were masked will be skipped and will not appear in the output of the function.

If you are processing multiple scans, these can be read into the model, m, but you may have to adapt the code to stay within the memory limitations of your machine. One workaround is to call m.load_array() and m.load_background() multiple times in the same program, i.e. load one IBIS scan, fit it, extract the results, then load the next IBIS scan and repeat.