This is a collection of frequently asked questions of common interest; it will be extended based on your reports and suggestions.
The applied cuts are listed in the Info.txt file shipped with the data package.
Open betas are released for two purposes: first, to demonstrate a product to potential users; second, to test it among a wide user base, which can uncover errors that a small number of testers might not find.
Beta test phase means that all features provided work, but the product is not yet 'feature complete'. We will continuously extend our offering, and above all we want to listen and respond to our users.
User selections are kept for roughly two weeks after the expiry date; after that they are removed by a cron job.
You can always use the 'resubmit' button on the 'review page' to get the same data sample, unless you have deleted the request there.
The number of one-dimensional quantities provided to the user is at present 24. If the arrays EDeposit, MDeposit, GDeposit and the arrival times of KASCADE and GRANDE were included in an ASCII file, the file size would grow by roughly a factor of 500: 390 million events (rows) with 3000 values (columns) each. In ROOT and HDF5 these arrays can be stored in structures to save space.
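The space argument can be made concrete. Below is a minimal sketch, not the actual KCDC export code (the file name, group path and event counts are made up), of how an HDF5 structure can store only the stations that actually fired instead of 3000 fixed text columns per row:

```python
# Sketch: store per-station arrays such as EDeposit as ragged,
# variable-length datasets instead of a fixed-width ASCII table.
import h5py
import numpy as np

vlen_f32 = h5py.vlen_dtype(np.float32)    # variable-length float arrays
rng = np.random.default_rng(0)

with h5py.File("events_sketch.h5", "w") as f:
    n_events = 1000                       # stand-in for ~390 million events
    dset = f.create_dataset("array/EDeposit", (n_events,), dtype=vlen_f32)
    for i in range(n_events):
        n_stations = int(rng.integers(5, 50))     # only triggered stations
        dset[i] = rng.random(n_stations).astype(np.float32)
```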
In the former KCDC release MERIDIAN we published the particle densities per station, normalized to track length and effective detector area, which differs between the two types of KASCADE array detector stations.
We decided to switch to energy deposits because we think that in this special case it makes more sense to publish a measured value than a corrected one. This offers users the unique possibility to build their own Lateral Energy Correction Function (LECF) and recalculate the densities per station from the deposited energy, knowing only the angle of incidence (Ze) and the effective detector area.
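Schematically, that recalculation might look as follows. The function names and the toy LECF below are illustrative only, not KCDC identifiers; the real parametrisation is exactly what the user would build themselves:

```python
import math

def density_per_station(e_deposit, zenith_deg, a_eff_m2, lecf):
    """Toy recalculation: particle density [1/m^2] from the published
    energy deposit, dividing out the mean deposit per particle (LECF)
    and the effective detector area."""
    return e_deposit / (lecf(zenith_deg) * a_eff_m2)

# Purely illustrative LECF: the deposit per particle scales with the
# track length through the detector, i.e. roughly 1/cos(Ze).
def toy_lecf(zenith_deg):
    return 10.0 / math.cos(math.radians(zenith_deg))

print(density_per_station(500.0, 20.0, 3.14, toy_lecf))
```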
The energy estimation for the KASCADE detector component is based on measuring the electromagnetic and the muonic components separately. This method does not work for GRANDE, as we have no independent muon measurement there; the estimate of the primary energy would therefore be too inaccurate.
We decided to redo the whole time calibration for the KASCADE array to increase the precision of the time-dependent quantities.
This is a very time- and CPU-consuming procedure which could not have been handled 10 years ago with the computer generation available at that time.
When you use KCDC with different accounts, please make sure to log off and close your browser before switching accounts.
Otherwise your identity will not be handled correctly.
The data structure of the KASCADE raw data is rather complex and depends on a lot of external information. Geometry and calibration are time-dependent and stored in CERN HEPDB databases. Some correction data, like air temperature and air pressure, are stored in separate files as well. The reconstruction program KRETA uses all this information to perform a calibration and apply corrections on an event-by-event basis. The data are then stored in so-called ntuples, which are partly published here.
We offer about 11% less data in MERIDIAN and NABOO than in the former releases WOLF 359 and VULCAN. This reflects the fact that all KASCADE publications are based on data sets with the same trigger conditions. Earlier runs (between 282 and 876) were recorded with a higher trigger threshold, resulting in an event rate of about 2 Hz. From 8 May 1998 (run 877) we lowered the threshold, roughly doubling the data rate. All subsequent runs up to the last KASCADE run, 7417, were recorded under the same trigger conditions. To ensure, as far as possible, a constant data quality for the whole data sample, we decided to reduce the data set offered.
The quantity Energy is derived from the formula described in the KCDC Manual, calculated from the quantities Ne, Nmu and the zenith angle (ZE). The parameters given are based on simulations using the high-energy interaction model QGSjet-II-2. The formula is valid above log10(E0) = 14.8 (log10(Ne) = 4), where the detector has full trigger efficiency, and below ZE = 42°. The estimated energy should not be used to apply cuts on data sets. For cuts it is strongly recommended to use the quantities Ne or Nmu, or a combination of both.
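For users who filter an already downloaded sample locally rather than in the data shop, a hedged sketch of such a cut follows; the column names and the whitespace-separated ASCII layout are assumptions, so check the Info.txt of your package:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("events.txt", sep=r"\s+")   # assumed ASCII export

# Cut on the shower sizes, not on the estimated energy; the limits
# mirror the validity range quoted above.
cut = (np.log10(df["Ne"]) >= 4.0) & (df["Ze"] < 42.0)
selected = df[cut]
print(len(selected), "events pass the Ne/Ze cut")
```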
If you are interested in data sets only when LOPES has a valid reconstruction, you have to apply a cut on the "LOPES Comp ID". This so-called 'LOPES Component Identifier' is set to '1' when LOPES data are present in the event; otherwise it does not exist. So a cut from '1' to '1' in "LOPES Comp ID" will do the job.
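The cut itself is applied in the KCDC data shop. If you instead filter an already downloaded sample, the equivalent local selection could look like the following; the column name is an assumption, and events without LOPES data may show up as missing values rather than not at all:

```python
import pandas as pd

df = pd.read_csv("events.txt", sep=r"\s+")     # assumed ASCII export
with_lopes = df[df["LOPES_Comp_ID"] == 1]      # keep events with LOPES data
```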
There is no fixed maintenance date. If possible, downtimes will be announced in advance, e.g. for release changes.
It is advisable to use the 'row_mapping' file to match events from different detector components like 'array', 'grande' and 'calorimeter'. When a detector component is missing in the respective event, the row_mapping entry is set to '-1' and the event counter for this component is not increased. The row_mapping information is always provided to the user with the data sets.
As an example of how to use row_mapping, a zipped C++ program fragment can be downloaded via
https-download.
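The downloadable example is C++; the same idea is sketched in Python below, under assumptions about the layout (an HDF5 export with one table per component and a row_mapping table whose columns follow the component order; the file and dataset names are illustrative):

```python
import h5py

with h5py.File("kcdc_export.h5", "r") as f:    # assumed file layout
    mapping = f["row_mapping"][:]              # shape (n_events, n_components)
    array_rows, grande_rows = mapping[:, 0], mapping[:, 1]

    pairs = []                                 # events seen by both components
    for ia, ig in zip(array_rows, grande_rows):
        if ia < 0 or ig < 0:                   # -1: component missing
            continue
        pairs.append((ia, ig))                 # same shower in both tables

print(len(pairs), "events have both 'array' and 'grande' data")
```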
When filling the MongoDB, the quantity 'microtime' was named 'M'. This means that the data sets made available to the users carry this variable name. We will fix this inconsistency the next time the MongoDB is completely refilled.
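Until then, a local rename keeps analysis code readable; a one-line sketch, assuming a pandas workflow on an ASCII export:

```python
import pandas as pd

df = pd.read_csv("events.txt", sep=r"\s+")       # assumed ASCII export
df = df.rename(columns={"M": "microtime"})       # undo the naming slip
```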
KAOS is the acronym for Karlsruhe Astroparticlephysics Open data Software. It has been written in the context of the KASCADE Cosmic Ray Data Centre (KCDC), a web portal designed for the publication of scientific data recorded with the KASCADE experiment.
KAOS is implemented using a plugin-based design, with a focus on easy extensibility and modifiability, so that it can also be used outside the context of KCDC.
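This is not KAOS's actual plugin API; purely as an illustration of the general pattern, here is a minimal registry where new functionality plugs in without touching the core code:

```python
from typing import Callable, Dict

PLUGINS: Dict[str, Callable[[dict], dict]] = {}

def register(name: str):
    """Decorator that adds a processing step to the global registry."""
    def wrap(func: Callable[[dict], dict]) -> Callable[[dict], dict]:
        PLUGINS[name] = func
        return func
    return wrap

@register("identity")
def identity(event: dict) -> dict:
    return event                               # trivial example plugin

def run(name: str, event: dict) -> dict:
    return PLUGINS[name](event)                # core code never changes
```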