flux

 FLUX NF NW (WEX FUDGE (NFL NFS SET) or WFAC ) FDAT [BWIDTH] 
      NSPLINE RLO RHI NREJ FLIN SOFT

 Method:

 FLUX prepares flux calibration data for later application to objects.
 The aim is to derive conversion factors so that the flux can be computed
 from the counts/sec in any particular pixel. FLUX does this by taking
 the ratio of an observation of a flux standard to tabulated data for the
 same star. The counts are left unchanged while the flux scale is adjusted
 to match the tabulated data, with the flux/counts ratio obtained by
 spline interpolation of the individual points.
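
 In outline, the calculation amounts to the following (a minimal sketch in
 Python/numpy rather than the actual molly code; the array names and the
 use of scipy's smoothing spline are illustrative assumptions):

   import numpy as np
   from scipy.interpolate import splrep, splev

   def calibration(wave_std, counts_std, wtab, ftab, wave_out):
       """Flux/counts conversion factors evaluated on the scale wave_out.

       wave_std, counts_std : observed standard, counts/sec vs wavelength (A)
       wtab, ftab           : tabulated fluxes of the same star (e.g. mJy)
       """
       ftab_obs = np.interp(wave_std, wtab, ftab)      # tabulated flux at observed wavelengths
       ratio = ftab_obs / counts_std                   # flux per count/sec at each point
       tck = splrep(wave_std, ratio, s=len(wave_std))  # smoothing spline through the ratios
       return splev(wave_out, tck)                     # evaluate, e.g., on a target's scale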

 Telluric absorption lines:

 With this spectrum one can then flux calibrate other spectra with FCAL,
 which applies extinction corrections at the same time. Normally FLUX does
 not need to bother with continuum extinction correction since the latter
 is smooth with wavelength. However, what if your data are affected by
 telluric absorption lines, which are sharp features? The first thing to
 say is that if you can avoid using such regions, then that is what you
 should do. In this case eliminate any tabulated data points in the
 affected regions, and hope that the spline interpolation can hop over
 them. However, if you are desperate AND the tabulated data has been
 corrected (e.g. with a model atmosphere), then FLUX allows you to remove
 the telluric lines as far as possible.

   To use this feature you must have a spectrum of a featureless object,
 normalised to 1 outside the telluric line regions (and then set exactly
 equal to 1 there to reduce noise). This is best done by fitting a spline
 with SFIT. The absorption lines may be saturated, in which case their
 strength will not in general be linearly proportional to airmass (in
 contrast to the continuum absorption). At low and intermediate resolution
 this can be approximately handled by (a) using a power-law scaling or
 (b) trying to minimise the scatter in a well-defined telluric region. The
 latter can be good if you cover the strong 7600A absorption feature but
 are interested in fluxes in weaker features (the 7600 region can hardly
 ever be reliably corrected for accurate work). At the moment molly does
 not include any automatic method for correcting high-resolution data
 (i.e. where the lines are well resolved) affected by telluric lines. In
 such cases the strength scales with airmass in a different way for each
 pixel, and a nearby standard is probably the best way of removing the
 features.
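
 As a rough illustration of the preparation step (a Python sketch, not
 molly itself; in practice you would use SFIT and CSET, and the 'clean'
 mask and spline settings here are assumptions):

   import numpy as np
   from scipy.interpolate import UnivariateSpline

   def water_spectrum(wave, counts, clean):
       """Normalise a featureless star to 1 outside the telluric regions.

       clean : boolean mask, True where there is no telluric absorption
       """
       cont = UnivariateSpline(wave[clean], counts[clean], s=clean.sum())
       water = counts / cont(wave)  # transmission, ~1 outside the telluric lines
       water[clean] = 1.0           # set clean regions exactly to 1 to reduce noise
       return water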

 Tabulated data:

 Tabulated data comes in two forms. First there are the Oke-type tables of
 fluxes measured over short intervals away from absorption lines. These
 trade off statistical noise in favour of reduced systematics, since only
 a small fraction of the spectrum is actually used, but it is the most
 reliable fraction. If your standards are bright this may be the method
 you want.

 Alternatively there is the Filippenko-Greenstein form, where the tabulated
 fluxes result from a spline fit to the continuum that avoids lines. This
 uses more of the spectrum and is therefore more open to systematic
 effects, but is better for faint standards.

 FLUX distinguishes between these according to whether there is a third
 column indicating the band-pass (if there is, the data are treated as Oke
 type). For Oke-type data, counts are measured in the same intervals as
 tabulated and the ratio of tabulated to observed is taken. This is then
 fitted with a spline to give the flux calibration.
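
 Schematically this step looks something like the following (a Python
 sketch assuming a simple mean over each band-pass; not the exact molly
 implementation):

   import numpy as np

   def oke_ratios(wave, counts, wtab, ftab, btab):
       """Flux/counts ratios at the tabulated Oke points.

       wtab, ftab, btab : tabulated wavelength, flux and band-pass width (A)
       """
       ratios = []
       for wc, f, b in zip(wtab, ftab, btab):
           inband = (wave > wc - b / 2.) & (wave < wc + b / 2.)
           ratios.append(f / counts[inband].mean())  # tabulated flux / mean counts/sec
       return np.array(ratios)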

 For Filippenko data, the table values are interpolated (linearly) onto the
 same wavelength scale as the observed spectrum. This obviously requires
 the tabulated data to be reasonably finely spaced, although the later
 spline fit will correct for some roughness. The spectrum is then blocked,
 ratios are taken, and a spline is fitted as for Oke data. If the tabulated
 data does not cover the wavelength range of the input data you will be
 warned and the data will not be used.
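
 Again schematically (a Python sketch, with the blocking done by a simple
 mean over BWIDTH pixels; an illustrative assumption rather than the molly
 code):

   import numpy as np

   def filippenko_ratios(wave, counts, wtab, ftab, bwidth):
       """Blocked flux/counts ratios for continuum-fit (Filippenko) tables."""
       ftab_obs = np.interp(wave, wtab, ftab)  # tabulated fluxes on the observed scale
       n = (len(wave) // bwidth) * bwidth      # trim so the arrays divide into blocks
       wblk = wave[:n].reshape(-1, bwidth).mean(axis=1)
       cblk = counts[:n].reshape(-1, bwidth).mean(axis=1)
       fblk = ftab_obs[:n].reshape(-1, bwidth).mean(axis=1)
       return wblk, fblk / cblk                # points to be fitted with a spline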

 See below for more about the format of the flux file and how you can 
 interpolate extra points since small numbers of points often cause
 difficulty.
 
 Parameters:

    NF -- Slot with flux star (wide slit spectrum)

    NW -- Slot with telluric absorption calibration spectrum (called water
          for short). 0 if you don't need to correct for it.
          If you do, then to prepare such a star you ideally require an
          intrinsically featureless spectrum observed with the same slit
          width as your targets. Divide this through by a spline fit to
          regions unaffected by telluric absorption and set these regions
          equal to 1 with CSET.

    If you need a water star then you also need to specify:

    WEX -- Exponent for scaling the telluric absorption spectrum from one
           airmass to another according to A' = A*(AM_O/AM_W)**WEX, where
           A' is the absorption in mags for the target, A is the calibration
           data, AM_O and AM_W are the airmasses of object and standard, and
           WEX is the exponent. If the telluric absorption were linear, WEX
           would equal 1, but normally the lines are saturated and WEX=0.6
           is closer to the mark.
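
           A minimal sketch of this scaling (Python; the function and
           variable names are just for illustration):

             import numpy as np

             def telluric_correct(target, water, am_obj, am_water, wex=0.6):
                 """Remove telluric absorption, scaled from the water star's
                 airmass to the target's; 'water' is normalised to 1 outside
                 the lines."""
                 a = -2.5 * np.log10(water)           # absorption in mags
                 ap = a * (am_obj / am_water) ** wex  # A' = A*(AM_O/AM_W)**WEX
                 return target * 10 ** (0.4 * ap)     # divide out the absorption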

     FUDGE - Rather than using the above scaling, the correction can be made
             by minimising the scatter in the absorption regions. The
             scatter is judged by correcting the spectrum, smoothing it a
             little to allow for shifts between standard and target which
             would impose a spurious scatter (NFS, the small-scale filter
             width), and then taking the difference with the same spectrum
             smoothed a lot (NFL, the large-scale width). The correction is
             then iterated to minimise this scatter; a sketch follows the
             SET entry below.

      NFS -  Small-scale smoothing. Enough to account for shifts between
             the water star and target. Needs to be larger for a flux star
             taken through a wide slit, as the difference in resolution
             causes artificial scatter.

      NFL -  Large-scale smoothing. Should be larger than the typical scale
             of the telluric absorption.

      SET  - You then have to set a mask to tell FLUX where the telluric
             lines are. You don't need to mask them all, just indicate where
             a few good ones are (the one near 7600A would be an obvious
             choice in the near-IR).
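
             A rough sketch of the scatter minimisation (Python; the boxcar
             filters, the bounds and the least-squares measure of scatter
             are assumptions, not the exact molly algorithm):

               import numpy as np
               from scipy.ndimage import uniform_filter1d
               from scipy.optimize import minimize_scalar

               def fudge_factor(target, water, mask, nfs, nfl):
                   """Scaling of the absorption that minimises the scatter
                   in the masked telluric regions."""
                   a = -2.5 * np.log10(water)  # absorption in mags

                   def scatter(fac):
                       corr = target * 10 ** (0.4 * fac * a)  # trial correction
                       small = uniform_filter1d(corr, nfs)    # NFS smoothing
                       large = uniform_filter1d(corr, nfl)    # NFL smoothing
                       return np.sum((small - large)[mask] ** 2)

                   res = minimize_scalar(scatter, bounds=(0., 5.),
                                         method='bounded')
                   return res.x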

      WFAC - If you don't wish to fudge, you have the option of specifying
             a ratio between the airmass used for the telluric absorption
             (``watermass'') and the true airmass. Ideally this should equal
             1, but it may not if, for example, you use a standard from one
             night on the next night and the water vapour content has
             changed.

    Back to parameters you always need:

      FDAT - Next you need a file of flux data, which should have columns
             of the form WAVELENGTH (Angstroms), AB magnitude or mJy flux,
             and, for Oke-type data only, BANDWIDTH (Angstroms). # and *
             are comment flags, but there should be one line before the
             data starts to indicate the type of flux being used. The line
             should be of the form Fluxes=AB or Fluxes=mJy. The tables
             should increase monotonically in wavelength. I tend to favour
             AB mags because I think that they interpolate better.

             Often you find that you don't have enough points, in which
             case you can put in entries such as 8300 -100. 40. and the
             routine will interpolate a flux at that wavelength. Obviously
             you have to be careful with this, in particular avoid lines.
             I use this quite extensively because otherwise I find the fit
             does not follow the true variations. This is particularly the
             case at high dispersion.
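
             For example, a flux file might look like the following (the
             wavelengths, magnitudes and band-passes are made up purely to
             show the layout; the -100. entry asks FLUX to interpolate a
             value at 8300A):

               # example Oke-type flux standard
               Fluxes=AB
               3600.  12.51  50.
               4500.  12.35  50.
               5500.  12.30  50.
               6800.  12.28  40.
               * interpolated point to help the spline
               8300.  -100.  40.
               9000.  12.27  60.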

       BWIDTH - If the data are identified as being of Filippenko type you
             have to enter a width in pixels for forming data points.
             Use 1 for high S/N data, but more if you don't have enough
             photons.

       NSPLINE, RLO, RHI, NREJ -- Parameters for the spline fit to the flux
             data; you will be allowed to adjust the number and positions
             of the spline knots interactively, as flux data can often
             prove difficult for automatic knot placement.

       FLIN -- Fit to linear data (else log). Data with a large dynamic
               range are sometimes better fitted in the log.

       SOFT -- Number to ``soften'' the weights for the fit. The variances
               become V' = SOFT*VBAR + (1-SOFT)*V, where V are the
               original variances and VBAR their mean. SOFT = 1
               gives equal weights.
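
               As a one-line formula (Python sketch, names illustrative
               only):

                 import numpy as np

                 def soften(v, soft):
                     """V' = SOFT*VBAR + (1-SOFT)*V for an array of variances."""
                     return soft * np.mean(v) + (1.0 - soft) * v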


Related command: fcal

This command belongs to the class: calibration


Tom Marsh, Warwick