Models
This module contains all the models for different CDI reconstructions.
All the reconstructions are coordinated through the ptychography models
defined here. The models are, at their core, just subclasses of the torch.nn.Module class, so they contain the same structure of
parameters, etc. Their central functionality is as a simulation that maps
some input (usually, the index number of a scan point) to an output that
corresponds to the measured data (usually, a diffraction pattern). This
model can then be used as the heart of an automatic differentiation
reconstruction which retrieves the parameters that were used in the model.
A main CDIModel class is defined in base.py, and models for various CDI geometries can be defined as subclasses of this base model. Subclasses of CDIModel are required to implement a set of functions defined in base.py; example implementations of these functions can be found in the code for the SimplePtycho class.
Finally, it is recommended to read through the tutorial section on defining a new ptychography model before attempting to do so.
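As a hedged sketch of what such a subclass looks like (the method names and signatures below follow the pattern described in the CDIModel.forward documentation and are assumptions; the authoritative list of required functions is in base.py, with a working example in SimplePtycho):

import torch as t
from cdtools.models import CDIModel

class MyPtycho(CDIModel):
    """A minimal sketch of a CDIModel subclass, not a working model."""

    def __init__(self, probe_guess, obj_guess):
        super().__init__()
        # Quantities to be reconstructed are registered as parameters
        self.probe = t.nn.Parameter(t.as_tensor(probe_guess, dtype=t.complex64))
        self.obj = t.nn.Parameter(t.as_tensor(obj_guess, dtype=t.complex64))

    def interaction(self, index, translations):
        # Map scan-point indices and translations to exit waves
        raise NotImplementedError

    def forward_propagator(self, wavefields):
        # Propagate the exit waves to the detector plane
        raise NotImplementedError

    def measurement(self, wavefields):
        # Convert detector-plane wavefields to simulated diffraction patterns
        raise NotImplementedError

    def loss(self, simulated, measured, mask=None):
        # Compare simulated and measured diffraction patterns; the exact
        # signature used by the library may differ (see SimplePtycho)
        raise NotImplementedError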
- class cdtools.models.CDIModel
Bases:
Module
This base model defines all the functions that must be exposed for a valid CDIModel subclass
Most of the functions only raise a NotImplementedError at this level and must be explicitly defined by any subclass - these are noted explicitly in the module-level intro. The work of defining the various subclasses boils down to creating an appropriate implementation for this set of functions.
- __init__()
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(*args)
The complete forward model
This model relies on composing the interaction, forward propagator, and measurement functions which are required to be defined by all subclasses. It therefore should not be redefined by the subclasses.
The arguments to this function, for any given subclass, will be the same as the arguments to the interaction function.
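Consistent with the description above, the composition can be pictured as the following sketch (not the exact implementation):

def forward(self, *args):
    # interaction: scan-point inputs -> exit waves
    exit_waves = self.interaction(*args)
    # forward propagator: exit waves -> detector-plane wavefields
    detector_waves = self.forward_propagator(exit_waves)
    # measurement: wavefields -> simulated diffraction data
    return self.measurement(detector_waves)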
- store_detector_geometry(detector_geometry, dtype=torch.float32)
Registers the information in a detector geometry dictionary
Information about the detector geometry is passed in as a dictionary, but we want the various properties to be registered as buffers in the model. This has nice effects, for example automatically updating with model.to, and making it possible to automatically save them out.
- Parameters:
detector_geometry (dict) – A dictionary containing at least the two entries ‘distance’ and ‘basis’
dtype (torch.dtype, default: torch.float32) – The datatype to convert the values to before registering
- get_detector_geometry()
Makes a detector geometry dictionary from the registered buffers
This extracts a dictionary with the detector geometry data from the registered buffers, helpful for functions which expect the geometry data to be in this format.
- Returns:
detector_geometry – A dictionary containing at least the two entries ‘distance’ and ‘basis’, pulled from the model’s buffers
- Return type:
dict
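A short usage sketch, assuming `model` is any CDIModel instance; the distance and basis values below are illustrative placeholders:

import numpy as np
import torch as t

detector_geometry = {
    'distance': 0.34,  # sample-to-detector distance, in meters (placeholder)
    'basis': np.array([[0, -30e-6], [-20e-6, 0], [0, 0]]),  # pixel basis (placeholder)
}
model.store_detector_geometry(detector_geometry, dtype=t.float32)

# The entries are now registered buffers, so they follow model.to(...) and are
# included when saving; they can be pulled back out as a dictionary:
geometry = model.get_detector_geometry()
print(geometry['distance'], geometry['basis'])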
- save_results()
A convenience function to get the state dict as numpy arrays
This function exists for two reasons, even though it is just a thin wrapper on top of torch.nn.Module.state_dict(). First, because the model parameters for automatic differentiation ptychography and related CDI methods are the results, it's nice to explicitly recognize extracting the state_dict as saving the results of the reconstruction.
Second, because display, further processing, long-term storage, etc. are often done with dictionaries of numpy arrays, it's useful to have a convenience function which does that conversion automatically.
- Returns:
results – A dictionary containing all the parameters and buffers of the model, i.e. the result of self.state_dict(), converted to numpy.
- Return type:
dict
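For example, assuming `model` holds a finished reconstruction:

results = model.save_results()
for name, value in results.items():
    # every value is a numpy array, ready for plotting, processing, or storage
    print(name, value.shape, value.dtype)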
- save_to_h5(filename, *args)
Saves the results to a .h5 file
- Parameters:
filename (str) – The filename to save under
*args – Accepts any additional args that model.save_results needs, for this model
- save_on_exit(filename, *args, exception_filename=None)
Saves the results of the model when the context is exited
If you wrap the main body of your code in this context manager, it will either save the results to a .h5 file upon completion, or when any exception is raised during execution.
- Parameters:
filename (str) – The filename to save under, upon completion
*args – Accepts any additional args that model.save_results needs, for this model
exception_filename (str) – Optional, a separate filename to use if an exception is raised during execution. Default is equal to filename
- save_on_exception(filename, *args)
Saves the results of the model if an exception occurs
If you wrap the main body of your code in this context manager, it will save the results to a .h5 file if an exception is thrown. If the code completes without an exception, it will not save the results, expecting that the results are explicitly saved later
- Parameters:
filename (str) – The filename to save under, in case of an exception
*args – Accepts any additional args that model.save_results needs, for this model
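A usage sketch of both context managers, assuming `model` and a matching `dataset` already exist and that this model's save_results takes the dataset as an argument; the filenames are placeholders:

# Save on completion or on any exception:
with model.save_on_exit('reconstruction.h5', dataset,
                        exception_filename='reconstruction_crashed.h5'):
    for loss in model.Adam_optimize(50, dataset):
        print(model.report())

# Or save automatically only on failure, and explicitly on success:
with model.save_on_exception('reconstruction_crashed.h5', dataset):
    for loss in model.Adam_optimize(50, dataset):
        print(model.report())
    model.save_to_h5('reconstruction.h5', dataset)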
- skip_computation()
Returns true if computations should be skipped due to checkpointing
This is used internally by model.AD_optimize to make the checkpointing system work, but it is also useful to suppress printing when computations are being skipped
- AD_optimize(iterations, data_loader, optimizer, scheduler=None, regularization_factor=None, thread=True, calculation_width=10)
Runs a round of reconstruction using the provided optimizer
This is the basic automatic differentiation reconstruction tool which all the other, algorithm-specific tools use. It is a generator which yields the average loss each epoch, ending after the specified number of iterations.
By default, the computation will be run in a separate thread. This is done to enable live plotting with matplotlib during a reconstruction. If the computation was done in the main thread, this would freeze the plots. This behavior can be turned off by setting the keyword argument ‘thread’ to False.
- Parameters:
iterations (int) – How many epochs of the algorithm to run
data_loader (torch.utils.data.DataLoader) – A data loader loading the CDataset to reconstruct
optimizer (torch.optim.Optimizer) – The optimizer to run the reconstruction with
scheduler (torch.optim.lr_scheduler._LRScheduler) – Optional, a learning rate scheduler to use
regularization_factor (float or list(float)) – Optional, if the model has a regularizer defined, the set of parameters to pass the regularizer method
thread (bool) – Default True, whether to run the computation in a separate thread to allow interaction with plots during computation
calculation_width (int) – Default 10, how many translations to pass through at once for each round of gradient accumulation. This does not affect the result, but may affect the calculation speed.
- Yields:
loss (float) – The summed loss over the latest epoch, divided by the total diffraction pattern intensity
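A sketch of driving AD_optimize directly with a hand-built optimizer and data loader (`model` and `dataset` are assumed to already exist; the hyperparameters are placeholders, not recommendations):

import torch as t

data_loader = t.utils.data.DataLoader(dataset, batch_size=15, shuffle=True)
optimizer = t.optim.Adam(model.parameters(), lr=0.005)

# thread=False keeps the computation in the main thread, since no live
# plotting is being done here
for loss in model.AD_optimize(20, data_loader, optimizer,
                              calculation_width=10, thread=False):
    print(model.report())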
- Adam_optimize(iterations, dataset, batch_size=15, lr=0.005, betas=(0.9, 0.999), schedule=False, amsgrad=False, subset=None, regularization_factor=None, thread=True, calculation_width=10)
Runs a round of reconstruction using the Adam optimizer
This is generally accepted to be the most robust algorithm for use with ptychography. Like all the other optimization routines, it is defined as a generator function, which yields the average loss each epoch.
- Parameters:
iterations (int) – How many epochs of the algorithm to run
dataset (CDataset) – The dataset to reconstruct against
batch_size (int) – Optional, the size of the minibatches to use
lr (float) – Optional, the learning rate (alpha) to use. Default is 0.005; 0.05 is typically the highest value with any chance of being stable
betas (tuple) – Optional, the beta_1 and beta_2 to use. Default is (0.9, 0.999).
schedule (bool) – Optional, whether to use the ReduceLROnPlateau scheduler
subset (list(int) or int) – Optional, a pattern index or list of pattern indices to use
regularization_factor (float or list(float)) – Optional, if the model has a regularizer defined, the set of parameters to pass the regularizer method
thread (bool) – Default True, whether to run the computation in a separate thread to allow interaction with plots during computation
calculation_width (int) – Default 10, how many translations to pass through at once for each round of gradient accumulation. Does not affect the result, only the calculation speed
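The typical high-level pattern looks like the sketch below (`model` and `dataset` are assumed to exist; the hyperparameters are illustrative rather than recommendations):

for loss in model.Adam_optimize(100, dataset, batch_size=15, lr=0.005,
                                schedule=True):
    print(model.report())
    model.inspect(dataset)  # update the live plots each epoch

model.inspect(dataset)
model.compare(dataset)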
- LBFGS_optimize(iterations, dataset, lr=0.1, history_size=2, subset=None, regularization_factor=None, thread=True, calculation_width=10, line_search_fn=None)
Runs a round of reconstruction using the L-BFGS optimizer
This algorithm is often less stable than Adam; however, in certain situations or geometries it can be shockingly efficient. Like all the other optimization routines, it is defined as a generator function which yields the average loss each epoch.
Note: There is no batch size, because it is usually a bad idea to use L-BFGS on anything but all the data at once
- Parameters:
iterations (int) – How many epochs of the algorithm to run
dataset (CDataset) – The dataset to reconstruct against
lr (float) – Optional, the learning rate to use
history_size (int) – Optional, the length of the history to use.
subset (list(int) or int) – Optional, a pattern index or list of pattern indices to use
regularization_factor (float or list(float)) – Optional, if the model has a regularizer defined, the set of parameters to pass the regularizer method
thread (bool) – Default True, whether to run the computation in a separate thread to allow interaction with plots during computation.
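One common pattern is to refine an Adam reconstruction with a short L-BFGS run, as sketched below (hyperparameters are placeholders):

for loss in model.Adam_optimize(100, dataset):
    print(model.report())

for loss in model.LBFGS_optimize(20, dataset, lr=0.1, history_size=2):
    print(model.report())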
- SGD_optimize(iterations, dataset, batch_size=None, lr=0.01, momentum=0, dampening=0, weight_decay=0, nesterov=False, subset=None, regularization_factor=None, thread=True, calculation_width=10)
Runs a round of reconstruction using the SGD optimizer
This algorithm is often less stable than Adam, but it is simpler and is the basic workhorse of gradient descent.
- Parameters:
iterations (int) – How many epochs of the algorithm to run
dataset (CDataset) – The dataset to reconstruct against
batch_size (int) – Optional, the size of the minibatches to use
lr (float) – Optional, the learning rate to use
momentum (float) – Optional, the momentum factor to use. Default is 0
subset (list(int) or int) – Optional, a pattern index or list of pattern indices to use
regularization_factor (float or list(float)) – Optional, if the model has a regularizer defined, the set of parameters to pass the regularizer method
thread (bool) – Default True, whether to run the computation in a separate thread to allow interaction with plots during computation
calculation_width (int) – Default 10, how many translations to pass through at once for each round of gradient accumulation
- report()
Returns a string with info about the latest reconstruction iteration
- Returns:
report – A string with basic info on the latest iteration
- Return type:
str
- inspect(dataset=None, update=True)
Plots all the plots defined in the model’s plot_list attribute
If update is set to True, it will update any previously plotted set of plots, if one exists, and then redraw them. Otherwise, it will plot a new set, and any subsequent updates will update the new set
Optionally, a dataset can be passed, which will allow plotting of any registered plots which need to incorporate some information from the dataset (such as geometry or a comparison with measured data).
Plots can be registered in any subclass by defining the plot_list attribute. This should be a list of tuples in the following format: ('Plot Title', function_to_generate_plot(self), function_to_determine_whether_to_plot(self)), where the third element (a function that returns True if the plot is relevant) is optional.
- Parameters:
dataset (CDataset) – Optional, a dataset matched to the model type
update (bool, default: True) – Whether to update existing plots or plot new ones
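A hedged sketch of registering plots in a subclass, following the tuple format described above (the attribute names probe and background are placeholders, and figure handling is simplified):

import numpy as np
from matplotlib import pyplot as plt

def plot_probe_amplitude(model):
    # Plot the amplitude of the model's probe parameter
    plt.imshow(np.abs(model.probe.detach().cpu().numpy()))
    plt.colorbar()

def plot_background(model):
    plt.imshow(model.background.detach().cpu().numpy())

# Typically assigned as self.plot_list in the subclass's __init__
plot_list = [
    ('Probe Amplitude', plot_probe_amplitude),
    # The optional third element marks this plot as relevant only when a
    # background is actually defined
    ('Detector Background', plot_background,
     lambda model: getattr(model, 'background', None) is not None),
]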
- save_figures(prefix='', extension='.pdf')
Saves all currently open inspection figures.
Note that this function is not very intelligent - so, for example, if multiple probe modes are being reconstructed and the probe plotting function allows one to scroll between different modes, it will simply save whichever mode happens to be showing at the moment. Therefore, this should not be treated as a good way of saving out the full state of the reconstruction.
By default, the files will be named by the figure titles as defined in the plot_list. Files can be saved with any extension supported by matplotlib.pyplot.savefig.
- Parameters:
prefix (str) – Optional, a string to prepend to the saved figure names
extension (str) – Optional, the file extension to save with. Default is '.pdf'
- compare(dataset, logarithmic=False)
Opens a tool for comparing simulated and measured diffraction patterns
This does what it says on the tin.
Also, I am very sorry, the implementation was done while I was possessed by Beelzebub - do not try to fix this; if it breaks, just kill it and start from scratch.
- Parameters:
dataset (CDataset) – A dataset containing the measured diffraction patterns to compare against
logarithmic (bool, default: False) – Whether to plot the diffraction on a logarithmic scale
- class cdtools.models.SimplePtycho(wavelength, probe_basis, probe_guess, obj_guess, min_translation=[0, 0])
Bases:
CDIModel
A simple ptychography model to demonstrate the structure of a model
- __init__(wavelength, probe_basis, probe_guess, obj_guess, min_translation=[0, 0])
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- save_results(dataset)
A convenience function to get the state dict as numpy arrays
This function exists for two reasons, even though it is just a thin wrapper on top of torch.nn.Module.state_dict(). First, because the model parameters for automatic differentiation ptychography and related CDI methods are the results, it's nice to explicitly recognize extracting the state_dict as saving the results of the reconstruction.
Second, because display, further processing, long-term storage, etc. are often done with dictionaries of numpy arrays, it's useful to have a convenience function which does that conversion automatically.
- Returns:
results – A dictionary containing all the parameters and buffers of the model, i.e. the result of self.state_dict(), converted to numpy.
- Return type:
dict
- class cdtools.models.FancyPtycho(wavelength, detector_geometry, obj_basis, probe_guess, obj_guess, surface_normal=tensor([0., 0., 1.]), min_translation=tensor([0., 0.]), background=None, probe_basis=None, translation_offsets=None, probe_fourier_shifts=None, mask=None, weights=None, translation_scale=1, saturation=None, probe_support=None, oversampling=1, fourier_probe=False, loss='amplitude mse', units='um', simulate_probe_translation=False, simulate_finite_pixels=False, exponentiate_obj=False, phase_only=False, dtype=torch.float32, obj_view_crop=0)
Bases:
CDIModel
- __init__(wavelength, detector_geometry, obj_basis, probe_guess, obj_guess, surface_normal=tensor([0., 0., 1.]), min_translation=tensor([0., 0.]), background=None, probe_basis=None, translation_offsets=None, probe_fourier_shifts=None, mask=None, weights=None, translation_scale=1, saturation=None, probe_support=None, oversampling=1, fourier_probe=False, loss='amplitude mse', units='um', simulate_probe_translation=False, simulate_finite_pixels=False, exponentiate_obj=False, phase_only=False, dtype=torch.float32, obj_view_crop=0)
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- center_probes(iterations=4)
Centers the probes
Note that this does not compensate for the centering by adjusting the object, so it’s a good idea to reset the object after centering the probes
- tidy_probes()
Tidies up the probes
What we want to do here is use all the information on all the probes to calculate a natural basis for the experiment, and update all the density matrices to operate in that updated basis
As a first step, we calculate the state of the light field across the full experiment, using the weight matrices and basis probes. Then, we use an SVD to update the basis probes so they form an eigenbasis of the implied density matrix for the full experiment.
Next, the weight matrices for each shot are recalculated so that the probes generated by weights * basis_probes for each shot are themselves an eigenbasis for that individual shot’s density matrix.
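A hedged numpy sketch of the basis-update idea described above, not the actual implementation (the shapes are assumptions: basis_probes is an (n_modes, N, M) complex array and weights is an (n_shots, n_modes, n_modes) stack of per-shot weight matrices):

import numpy as np

def natural_probe_basis(weights, basis_probes):
    n_modes = basis_probes.shape[0]
    flat = basis_probes.reshape(n_modes, -1)  # (n_modes, N*M)

    # The light-field state of shot i is given by the rows of
    # weights[i] @ flat; stack all shots into one matrix of states
    states = (weights @ flat).reshape(-1, flat.shape[1])

    # The leading right singular vectors form an orthonormal basis for the
    # dominant modes across the full experiment (an eigenbasis of the
    # implied density matrix)
    _, _, Vh = np.linalg.svd(states, full_matrices=False)
    new_flat = Vh[:n_modes]

    # Re-express each shot's state in the new basis; since the rows of
    # new_flat are orthonormal, this is a simple projection. A further
    # per-shot SVD of each new weight matrix would make each shot's probes
    # an eigenbasis of its own density matrix, as described above.
    new_weights = (weights @ flat) @ new_flat.conj().T

    return new_weights, new_flat.reshape(basis_probes.shape)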
- save_results(dataset)
A convenience function to get the state dict as numpy arrays
This function exists for two reasons, even though it is just a thin wrapper on top of torch.nn.Module.state_dict(). First, because the model parameters for automatic differentiation ptychography and related CDI methods are the results, it's nice to explicitly recognize extracting the state_dict as saving the results of the reconstruction.
Second, because display, further processing, long-term storage, etc. are often done with dictionaries of numpy arrays, it's useful to have a convenience function which does that conversion automatically.
- Returns:
results – A dictionary containing all the parameters and buffers of the model, i.e. the result of self.state_dict(), converted to numpy.
- Return type:
dict
- class cdtools.models.Bragg2DPtycho(wavelength, detector_geometry, obj_basis, probe_guess, obj_guess, min_translation=tensor([0., 0.]), probe_basis=None, median_propagation=tensor(0.), background=None, translation_offsets=None, mask=None, weights=None, translation_scale=1, saturation=None, probe_support=None, oversampling=1, propagate_probe=True, correct_tilt=True, lens=False, units='um', dtype=torch.float32, obj_view_crop=0)
Bases:
CDIModel
- __init__(wavelength, detector_geometry, obj_basis, probe_guess, obj_guess, min_translation=tensor([0., 0.]), probe_basis=None, median_propagation=tensor(0.), background=None, translation_offsets=None, mask=None, weights=None, translation_scale=1, saturation=None, probe_support=None, oversampling=1, propagate_probe=True, correct_tilt=True, lens=False, units='um', dtype=torch.float32, obj_view_crop=0)
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- save_results(dataset)
A convenience function to get the state dict as numpy arrays
This function exists for two reasons, even though it is just a thin wrapper on top of torch.nn.Module.state_dict(). First, because the model parameters for automatic differentiation ptychography and related CDI methods are the results, it's nice to explicitly recognize extracting the state_dict as saving the results of the reconstruction.
Second, because display, further processing, long-term storage, etc. are often done with dictionaries of numpy arrays, it's useful to have a convenience function which does that conversion automatically.
- Returns:
results – A dictionary containing all the parameters and buffers of the model, i.e. the result of self.state_dict(), converted to numpy.
- Return type:
dict
- class cdtools.models.Multislice2DPtycho(wavelength, detector_geometry, probe_basis, probe_guess, obj_guess, dz, nz, detector_slice=None, surface_normal=array([0., 0., 1.]), min_translation=tensor([0., 0.]), background=None, translation_offsets=None, mask=None, weights=None, translation_scale=1, saturation=None, probe_support=None, oversampling=1, bandlimit=None, subpixel=True, exponentiate_obj=True, fourier_probe=False, prevent_aliasing=True, phase_only=False, units='um')
Bases:
CDIModel
- __init__(wavelength, detector_geometry, probe_basis, probe_guess, obj_guess, dz, nz, detector_slice=None, surface_normal=array([0., 0., 1.]), min_translation=tensor([0., 0.]), background=None, translation_offsets=None, mask=None, weights=None, translation_scale=1, saturation=None, probe_support=None, oversampling=1, bandlimit=None, subpixel=True, exponentiate_obj=True, fourier_probe=False, prevent_aliasing=True, phase_only=False, units='um')
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- to(*args, **kwargs)
Move and/or cast the parameters and buffers.
This can be called as
- to(device=None, dtype=None, non_blocking=False)
- to(dtype, non_blocking=False)
- to(tensor, non_blocking=False)
- to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
See below for examples.
Note
This method modifies the module in-place.
- Parameters:
device (torch.device) – the desired device of the parameters and buffers in this module
dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module
tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument)
- Returns:
self
- Return type:
Module
Examples:
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
- tidy_probes()
Tidies up the probes
What we want to do here is use all the information on all the probes to calculate a natural basis for the experiment, and update all the density matrices to operate in that updated basis
As a first step, we calculate the state of the light field across the full experiment, using the weight matrices and basis probes. Then, we use an SVD to update the basis probes so they form an eigenbasis of the implied density matrix for the full experiment.
Next, the weight matrices for each shot are recalculated so that the probes generated by weights * basis_probes for each shot are themselves an eigenbasis for that individual shot’s density matrix.
- save_results(dataset)
A convenience function to get the state dict as numpy arrays
This function exists for two reasons, even though it is just a thin wrapper on top of torch.nn.Module.state_dict(). First, because the model parameters for automatic differentiation ptychography and related CDI methods are the results, it's nice to explicitly recognize extracting the state_dict as saving the results of the reconstruction.
Second, because display, further processing, long-term storage, etc. are often done with dictionaries of numpy arrays, it's useful to have a convenience function which does that conversion automatically.
- Returns:
results – A dictionary containing all the parameters and buffers of the model, i.e. the result of self.state_dict(), converted to numpy.
- Return type:
dict
- class cdtools.models.MultislicePtycho(wavelength, detector_geometry, obj_basis, probe_guess, obj_guess, interslice_propagator, surface_normal=tensor([0., 0., 1.]), min_translation=tensor([0., 0.]), background=None, probe_basis=None, translation_offsets=None, mask=None, weights=None, translation_scale=1, saturation=None, probe_support=None, oversampling=1, fourier_probe=False, loss='amplitude mse', units='um', simulate_probe_translation=False, simulate_finite_pixels=False, dtype=torch.float32, exponentiate_obj=False, obj_view_crop=0)
Bases:
CDIModel
- __init__(wavelength, detector_geometry, obj_basis, probe_guess, obj_guess, interslice_propagator, surface_normal=tensor([0., 0., 1.]), min_translation=tensor([0., 0.]), background=None, probe_basis=None, translation_offsets=None, mask=None, weights=None, translation_scale=1, saturation=None, probe_support=None, oversampling=1, fourier_probe=False, loss='amplitude mse', units='um', simulate_probe_translation=False, simulate_finite_pixels=False, dtype=torch.float32, exponentiate_obj=False, obj_view_crop=0)
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- center_probes(iterations=4)
Centers the probes
Note that this does not compensate for the centering by adjusting the object, so it’s a good idea to reset the object after centering the probes
- tidy_probes()
Tidies up the probes
What we want to do here is use all the information on all the probes to calculate a natural basis for the experiment, and update all the density matrices to operate in that updated basis
As a first step, we calculate the state of the light field across the full experiment, using the weight matrices and basis probes. Then, we use an SVD to update the basis probes so they form an eigenbasis of the implied density matrix for the full experiment.
Next, the weight matrices for each shot are recalculated so that the probes generated by weights * basis_probes for each shot are themselves an eigenbasis for that individual shot’s density matrix.
- save_results(dataset)
A convenience function to get the state dict as numpy arrays
This function exists for two reasons, even though it is just a thin wrapper on top of torch.nn.Module.state_dict(). First, because the model parameters for automatic differentiation ptychography and related CDI methods are the results, it's nice to explicitly recognize extracting the state_dict as saving the results of the reconstruction.
Second, because display, further processing, long-term storage, etc. are often done with dictionaries of numpy arrays, it's useful to have a convenience function which does that conversion automatically.
- Returns:
results – A dictionary containing all the parameters and buffers of the model, i.e. the result of self.state_dict(), converted to numpy.
- Return type:
dict
- class cdtools.models.RPI(wavelength, detector_geometry, probe_basis, probe, obj_guess, background=None, mask=None, saturation=None, obj_support=None, oversampling=1, weight_matrix=False, exponentiate_obj=False, phase_only=False, propagation_distance=0, units='um', dtype=torch.float32)
Bases:
CDIModel
- __init__(wavelength, detector_geometry, probe_basis, probe, obj_guess, background=None, mask=None, saturation=None, obj_support=None, oversampling=1, weight_matrix=False, exponentiate_obj=False, phase_only=False, propagation_distance=0, units='um', dtype=torch.float32)
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- get_obj_shape_and_n_modes(obj_shape=None, n_modes=None)
Sets defaults for obj shape and n modes
- uniform_init(obj_shape=None, n_modes=None)
Sets a uniform object initialization
- random_init(obj_shape=None, n_modes=None)
Sets a uniform amplitude object initialization with random phase
- spectral_init(pattern, obj_shape=None, n_modes=None)
Initializes the object with a spectral method
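A brief usage sketch of these initialization methods, assuming `model` is an RPI instance and `pattern` is a single measured diffraction pattern (the object shape and mode count are illustrative):

model.random_init(obj_shape=(256, 256), n_modes=1)

# or, seed the object from the measured pattern with the spectral method
model.spectral_init(pattern, obj_shape=(256, 256), n_modes=1)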
- save_results(dataset=None)
A convenience function to get the state dict as numpy arrays
This function exists for two reasons, even though it is just a thin wrapper on top of torch.nn.Module.state_dict(). First, because the model parameters for automatic differentiation ptychography and related CDI methods are the results, it's nice to explicitly recognize extracting the state_dict as saving the results of the reconstruction.
Second, because display, further processing, long-term storage, etc. are often done with dictionaries of numpy arrays, it's useful to have a convenience function which does that conversion automatically.
- Returns:
results – A dictionary containing all the parameters and buffers of the model, i.e. the result of self.state_dict(), converted to numpy.
- Return type:
dict