Calibration Sequences
=====================

Calibration sequences automate the process of running a sequence of ``dirigent``
calibrations. For example, in the context of CMS module QC, a number of tests
must be performed on each module (the list below is simplified for this
example):

+-------------------------------+------------------------------+
| Test                          | Example ``dirigent`` command |
+===============================+==============================+
| SLDO test                     | ``sldo_vi_curves``           |
+-------------------------------+------------------------------+
| VDDD and VDDA sweep           | ``voltage_trimming``         |
+-------------------------------+------------------------------+
| Sensor bias IV                | ``iv_curve``                 |
+-------------------------------+------------------------------+
| Tuning to 1.0 ke              | ``calibrate_module``         |
+-------------------------------+------------------------------+
| Non-radiative open bump test  | ``crosstalk_method``         |
+-------------------------------+------------------------------+

This can be achieved by running each of the ``dirigent`` calibrations manually,
one by one, which gives the user much greater control. However, this approach
has a number of downsides:

* a greater time taken to perform full sequences of tests;
* manual intervention required between each calibration to upload data to
  databases and run the next calibration;
* time required to ramp instruments up and down at each step (coldbox, HV, ...).

Using ``dirigent`` calibration sequences, all steps and background processes
are automated by the software. These scripts organise running the actual
``dirigent`` calibrations (``iv_curve``, ``calibrate_module``, ...)
sequentially, meaning no code is duplicated; orchestrate which instruments are
sent to each calibration, so that each calibration controls only the minimum
required instruments, meaning calibrations do not need to be modified and can
still be run independently; and organise and queue all data for upload to
Panthera at the end of the calibration sequence (rather than starting and
stopping an upload after each calibration), meaning little to no input is
required from the user during runtime.

CMS Module QC Electrical tests
------------------------------

Calibration sequences are designed to realise the CMS Inner Tracker Module
Electrical Quality Control tests that are performed at several stages of
module production. After the bare module tests, the tests are grouped into
three different types, each consisting of a number of individual tests. A
complete list of the tests performed in each type is given in the
`QC document `_ and each type is briefly described here:

1. Assembly Test: the Assembly Test confirms all the functionality of the
   module that can be influenced by mechanical issues during the module
   assembly and wire bonding process. The module is tested at 17 °C. Modules
   which do not fulfil the electrical requirements in the Assembly Test are
   excluded from all subsequent production steps. The command is::

       dirigent assembly_test

2. Functional Test: the Functional Test is executed after each shipment during
   the module production, especially after reception of the module from spark
   protection (parylene coating), and at the module integration site after
   shipment from the module production/testing site. The Functional Test
   excludes any physical damage to the module by testing the electrical
   integrity of the sensor and readout chips, and of the sensitive wire bonds.
   The module is tested at 17 °C. Modules which do not fulfil the electrical
   requirements in the Functional Test are excluded from all subsequent
   production steps.
   The command is::

       dirigent functional_test

3. Full Performance Test: the Full Performance Test provides the data used for
   the module selection. The module is tested at 17 °C as well as at the
   operational temperature of −20 °C on the readout chips. The test is
   performed after the BURN-IN step. The command is::

       dirigent full_performance

Like calibrations, each calibration sequence can be run with the ``--help``
option to guide the user and explain the options, e.g.::

    dirigent assembly_test --help

Please expand the :ref:`API Reference - module_testing ` section in the pane on
the left, and open the ``sequences`` page for a list and detailed descriptions
of each available calibration sequence and its function.

Config files
------------

Like ``dirigent`` calibrations, calibration sequences make use of the
``dirigent.toml``, ``instruments.json`` and ``modules.json`` configuration
files. There are, however, a couple of things to note.

**dirigent.toml**

Like calibrations, calibration sequence-specific configuration values can be
defined by the user in a section named after the calibration sequence. This
looks as follows::

    [assembly_test]
    tuning_method = 'custom_sequence'

However, the user must also consider that the calibration sequence they are
running will run calibrations (``iv_curve``, ``sldo_vi_curves``, ...) that
*also* get their settings from the ``dirigent.toml`` file. Therefore, please
consider which calibrations will be run by your calibration sequence, and
check that their settings are correctly defined in the ``dirigent.toml``
*before* running the sequence.

In the above case, the ``tuning_method`` parameter is set to
``"custom_sequence"``. This means that at the tuning step, ``custom_sequence``
will be run with whatever sequence is defined in the ``[custom_sequence]``
section of the ``dirigent.toml`` (or, if nothing is provided, the default
will be run).
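To make this layering concrete, a ``dirigent.toml`` for such a run might
contain sections for the sequence itself, for the tuning sequence it triggers,
and for the individual calibrations it runs. The keys shown under
``[custom_sequence]`` and ``[iv_curve]`` below are hypothetical placeholders
used to illustrate the structure, not necessarily real ``dirigent`` settings.

```toml
# Settings read by the calibration sequence itself
[assembly_test]
tuning_method = 'custom_sequence'

# Settings read by the tuning step that the sequence triggers
# (the 'sequence' key is a hypothetical placeholder)
[custom_sequence]
sequence = ['PixelAlive_Analog', 'SCurveScan']

# Settings read by the iv_curve calibration when the sequence runs it
# (the 'max_voltage' key is a hypothetical placeholder)
[iv_curve]
max_voltage = -120
```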
**instruments.json**

Unlike calibrations, calibration sequences actually *require* certain
instruments to be defined. While this is not the preferred ``dirigent``
practice, calibration sequences were created for a specific purpose: CMS Inner
Tracker Module QC. This includes use of the ``PSIColdbox`` ICICLE instrument
class, and performing I-V measurements, which requires HV, etc. The
instruments required for each calibration sequence are listed below:

* ``assembly_test``: ``"lv"``, ``"hv"``, and ``"ab"``
* ``functional_test``: ``"lv"`` and ``"hv"``
* ``full_performance``: ``"lv"`` and ``"hv"``

**modules.json**

The ``modules.json`` file behaves exactly the same as for regular calibrations
and is only included here for completeness, and to remove ambiguity. Please
use it as normal.

Script workflow
---------------

This section provides greater detail about how calibration sequences work and
is intended more for developers and interested parties than for casual users.
It is a good idea to read it to understand how calibration sequences work, but
feel free to skip this section.

Each calibration sequence is created as a script, the same as regular
calibrations, and will therefore instantiate ``dirigent`` as usual. The only
difference in a calibration sequence script is the use of the
``run_sequence()`` function, which creates an instance of the
``CalibrationSequence`` class. More about this class can be found in the API
reference. This class handles everything needed to prepare the ``dirigent``
instance for the list of calibrations (``dirigent`` subcommands) that it will
run, including anticipating the data that will be created, names of folders,
test names for uploading to Panthera, turning on the ``PSIColdbox`` ICICLE
instrument defined in ``instruments.json``, and managing the other
instruments.

Calibrations in the sequence are run using the ``subprocess`` Python module.
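The pattern can be sketched as follows. This is a simplified, hypothetical
illustration of how a sequence step might be launched via ``subprocess``; the
helper name ``build_step_command`` and the concrete argument values are
placeholders, not the actual ``CalibrationSequence`` implementation.

```python
import subprocess

def build_step_command(calibration, extra_flags, instruments_json,
                       channels_json, results_dir):
    """Assemble a dirigent invocation for one step of a sequence.

    The global flags mirror those described in this section: monitoring,
    the logo, log-file modification and Panthera upload are all handled by
    the main dirigent instance, so they are disabled for the child process.
    """
    return [
        "dirigent",
        "--no-monitoring",
        "--instruments-dict", instruments_json,
        "--channels-dict", channels_json,
        "-p",               # no Panthera upload from the child
        "-l",               # no logo at each calibration step
        "-m",               # no Ph2 ACF log-file modification
        "-D", results_dir,  # nest results under the sequence's own folder
        calibration,
    ] + extra_flags

# Example: the iv_curve step of an assembly_test sequence
# (instrument class and resource strings are illustrative placeholders)
cmd = build_step_command(
    "iv_curve",
    ["-w", "-t"],
    '{"hv_1": {"class": "SomeHVClass", "resource": "ASRL..."}}',
    '{"0": {"hv": {"instrument": "hv_1"}}}',
    "Results_dirigent/assembly_test/00_iv_curve/",
)
# The sequence would then run it with e.g.:
# subprocess.run(cmd, check=True)
```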
The main reason for this was time constraints: after experimenting with some
other methods (such as ``click``'s ``invoke()`` or ``callback()`` functions),
I decided to stick with ``subprocess``, as I can make use of ``dirigent``'s
command-line flexibility and easily add functionality should I need to.

What this means in the background is that the ``assembly_test`` script (for
example) will of course initialise ``dirigent``. Then, for each calibration
that is run, a new instance of ``dirigent`` is created and destroyed, and
finally the exit of the ``assembly_test`` instance of ``dirigent`` is run.
This leads to a pattern of ``dirigent`` init, calibration 1 init, calibration
1 exit, calibration 2 init, calibration 2 exit, ..., ``dirigent`` exit, i.e.
a double ``dirigent`` init at the beginning and a double exit at the end
(since each calibration is just another ``dirigent`` instance). This caused
lots of problems with the way ``dirigent`` worked, and I had to add what one
might call 'workarounds' rather than 'solutions'. Some of these are detailed
below, most easily described using an example.

The ``CalibrationSequence`` class prepares lists of strings forming the
``dirigent`` subcommands (calibrations) that are run (users can also run these
commands on the command line). Using ``iv_curve`` as an example, the
``assembly_test`` script will run::

    dirigent --no-monitoring --instruments-dict <instruments_dict> --channels-dict <channels_dict> -p -l -m -D <directory> iv_curve -w -t

To explain each part:

* ``--no-monitoring``: Monitoring is switched off, since the main ``dirigent``
  instantiation performs the monitoring.
* ``--instruments-dict <instruments_dict>``: As mentioned before, the scripts
  orchestrate which instruments are passed to each calibration in the
  sequence. In this case, the coldbox can be removed from the dictionary
  string ``<instruments_dict>``, meaning this instance of ``iv_curve`` will
  not be able to attempt to control it, since the main ``dirigent``
  instantiation has already turned it on.
* ``--channels-dict <channels_dict>``: Same as above.
* ``<instruments_dict>``: This is a JSON-formatted string that should contain
  the same information as the ``"instruments"`` dictionary in the
  ``instruments.json`` file, e.g.
  ``'{"ab_1": {"class": "ETHProbecard", "resource": "ASRL/dev/ttyUSB0::INSTR"}}'``.
  On initialisation, ``dirigent`` converts it into the corresponding Python
  object, a dictionary.
* ``<channels_dict>``: Same as above, but with the ``"channels"`` dictionary
  in the ``instruments.json`` file, e.g.
  ``'{"0": {"ab": {"instrument": "ab_1"}}}'``.
* ``-p``: Disables this instance of ``iv_curve`` from uploading to Panthera
  (and therefore asking for a username and password after each calibration),
  since the main ``dirigent`` instance will handle this. Note that Felis is
  still allowed to (and indeed, should!) create its files.
* ``-l``: Stops the ``dirigent`` logo from printing at each calibration step.
* ``-m``: Disables modification of the Ph2 ACF logging file, since this is
  performed by the main ``dirigent`` instance.
* ``-D <directory>``: Overrides the default directory used to store results,
  which for ``iv_curve`` would be ``Results_dirigent/iv_curve/``. Each
  calibration sequence stores individual calibration results nested in a
  folder named after itself, e.g.
  ``Results_dirigent/assembly_test/00_iv_curve/``.
* ``iv_curve``: Run the ``iv_curve`` calibration.
* ``-w``: Write the calculated breakdown voltages to a file. Specific to the
  ``iv_curve`` calibration.
* ``-t``: Enable temperature monitoring. This is specific to the ``iv_curve``
  calibration. Other calibrations may use other flags.

As touched on briefly above, for the rest of the calibration sequence none of
the calibrations will receive the coldbox instrument. This means they will not
be able to attempt to control it, since the default behaviour of each
calibration is to turn the coldbox on and off. For IV-like calibrations (e.g.
``iv_curve``), only the defined HV and LV instruments (in the
``instruments.json``) will be passed to the calibration.
For SLDO-like calibrations (e.g. ``sldo_vi_curves``, ``sldo_vi_curves_GADC``),
calibration sequence scripts will pass only the LV and, if needed, the probe
card instrument(s), since HV is not required. The main benefit of doing things
this way is that individual calibration behaviour does not need to be modified
to be compatible with calibration sequences. Calibrations can be written (and
modified) without calibration sequences in mind, and can be run individually
as they always have been. Users can still perform separate I-V or module
tuning calibrations.

To continue using ``iv_curve`` as an example, another feature to mention is
the breakdown analysis. Calibration sequences will analyse the breakdown
voltages from ``iv_curve`` and determine whether it is safe to apply the
``"default_voltage"`` in ``instruments.json`` to each of the modules in the
system. If it is, the sequence will power on the LV, ramp up the HV to
``"default_voltage"``, and then withhold those instruments from the tuning
sequence (``calibrate_module``, ``custom_calibration``, etc.) and the open
bumps test (``crosstalk_method``), which are run sequentially. This means that
the HV instrument does not need to be ramped up and then ramped down for each
calibration. Again, the benefit here is that the individual calibrations have
not been modified and can still be run by users as they always have been, and
the main ``dirigent`` instance still has control of the instruments, meaning
safety features are still engaged.

Finally, if ``felis = true`` in the ``dirigent.toml`` file, calibration
sequences will allow calibrations to create the necessary files for Felis and
hand them over, but will not upload them until the end of the script. The main
``dirigent`` instance then searches for the files that Felis created during
each calibration, and gives them to its own Felis instance, which is then
allowed to upload to Panthera. Doing things this way has many benefits.
Firstly, calibrations already create and handle all the files they need for
uploading to Panthera; calibration sequences simply exploit this, meaning no
extra code need be maintained. Also, by not allowing individual calibrations
to upload to Panthera, the calibration sequence is not interrupted and can
continue automatically, asking the user for a username and password only at
the end of the sequence.

Integration with the central repo
---------------------------------

The final feature to mention is the calibration sequence aptly named
``calibration_sequence``. This is designed to take an input via the
``--sequence`` (``-s``) flag: a string containing an inner-tracker-tests repo
sequence name (such as ``"TFPX_FullPerformance_Test"``). It will then attempt
to dynamically run through the sequence of tests given, rather than using a
far simpler pre-defined sequence as with ``assembly_test``,
``functional_test``, and ``full_performance``.

The lists of tests in the central "inner-tracker-tests" repo (known as
"sequences") reside in the ``TestSequences.py`` file, in the
``CompositeTests`` dictionary. In the past, these sequences (Python lists)
only included Ph2 ACF-like scans (e.g. ``pixelalive``, ``scurve``), each
using specific settings given in the ``HWSettings.py`` file. The results are
so-called "tests". One simple example is ``"PixelAlive_Analog"``, which is a
Ph2 ACF ``pixelalive`` "scan" with the ``INJtype`` setting = ``"1"`` (among
many others). A ``dirigent`` user could automate running through these
sequences using ``calibrate_module`` by providing the name of the sequence.
However, more recently the sequences in the repo can include a combination of
Ph2 ACF-like tests (``"PixelAlive_Analog"``, ``"SCurveScan"``, ...) and
non-Ph2 ACF tests. Some examples are ``"IVCurve_High"`` and ``"SLDOScan"``.
The ``dirigent`` calibration ``calibrate_module`` does not know how to
properly handle these strings and will, of course, not perform I-V or SLDO
measurements. This is why calibration sequences, and in fact the
``calibration_sequence`` script, were created. ``calibration_sequence`` takes
the list and tries to interpret it in a way that ``dirigent`` understands. For
example, if the list were::

    test_sequence = [
        "SLDOScan_GADC",
        "TrimbitScan_GADC",
        "IVCurve",
        "PixelAlive_Analog",
        "ThresholdEqualization",
        "ThresholdAdjustment_2000",
        "ThresholdEqualization_1Step_2000",
        "NoiseScan_2000",
        "PixelAlive_Analog_2000",
        "SCurveScan_1100",
        "ThresholdAdjustment_1200",
        "ThresholdEqualization_1Step_1200",
        "NoiseScan_1200",
        "PixelAlive_Analog_1200",
        "SCurveScan",
        "crosstalk_analyze",
        "ThresholdAdjustment_1000",
        "ThresholdEqualization_1Step_1000",
        "PixelAlive_Analog_1000",
        "SCurveScan_800",
    ]
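One way such a mixed list might be partitioned can be sketched as follows.
This is a hypothetical illustration of the interpretation step, not the
actual ``calibration_sequence`` logic; the prefixes used to recognise
non-Ph2 ACF entries are assumptions based only on the examples in this
section.

```python
# Prefixes of non-Ph2 ACF tests, taken from the examples in this section
# (IVCurve*, SLDOScan*, crosstalk*); purely illustrative.
NON_PH2ACF_PREFIXES = ("IVCurve", "SLDOScan", "crosstalk")

def interpret_sequence(sequence):
    """Split a CompositeTests-style sequence into steps dirigent could run.

    Consecutive Ph2 ACF-like tests are grouped into one run (which could be
    handed to calibrate_module in one go); non-Ph2 ACF entries become
    standalone steps.
    """
    steps = []
    ph2acf_run = []
    for test in sequence:
        if test.startswith(NON_PH2ACF_PREFIXES):
            if ph2acf_run:
                steps.append(("ph2acf", ph2acf_run))
                ph2acf_run = []
            steps.append(("standalone", [test]))
        else:
            ph2acf_run.append(test)
    if ph2acf_run:
        steps.append(("ph2acf", ph2acf_run))
    return steps

steps = interpret_sequence(
    ["SLDOScan_GADC", "IVCurve", "PixelAlive_Analog", "SCurveScan"]
)
# steps -> [("standalone", ["SLDOScan_GADC"]),
#           ("standalone", ["IVCurve"]),
#           ("ph2acf", ["PixelAlive_Analog", "SCurveScan"])]
```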