Speaker
Description
The International Data Center produces automatic event bulletins, which are subsequently reviewed by analysts. Analysis of 2014-2023 data shows that less than 50% of the events reported in the first automatic list (SEL1) were also included in the Late Event Bulletin (LEB). Reducing the number of malformed events could significantly ease analyst workloads. To address this, we introduce a method for assessing the legitimacy of proposed events. A key feature is whether a station should be expected to detect a given event, which we model for each station using data from the LEB. Scoring functions are then built from classifiers trained to determine whether an event in an automatic bulletin would pass analyst review. These classifiers use features obtained by evaluating the likelihood of the detection model for the proposed events and their corresponding detection patterns. Applied to one year of independent SEL1 test data, a classifier based on our scoring function identified 72.05% of false events while falsely flagging only 5% of legitimate ones. Additionally, for many low-scoring events that the analysts retained, the scores provided valuable insights and pointed to important data corrections; these events received much higher scores in the LEB.
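For readers who want a concrete picture of the workflow described above, the sketch below (Python, synthetic data) illustrates the assumed three steps: fitting a per-station detection-probability model from analyst-reviewed data, deriving a log-likelihood feature from each proposed event's detection pattern, and thresholding a classifier so that only about 5% of legitimate events are flagged. The feature set, model choices, and data here are illustrative assumptions, not the method's actual implementation.

```python
# Minimal sketch (not the authors' code) of a legitimacy-scoring pipeline:
# 1) per-station detection model, 2) detection-pattern log-likelihood feature,
# 3) classifier threshold chosen to flag ~5% of legitimate events.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_hist, n_stations = 5000, 10

# --- 1. Per-station detection model (hypothetical predictors: magnitude, distance) ---
mag = rng.uniform(3.0, 6.0, (n_hist, 1))
dist = rng.uniform(0.0, 90.0, (n_hist, 1))            # epicentral distance, degrees
X_det = np.hstack([mag, dist])
p_true = (1.0 / (1.0 + np.exp(-(2.0 * (mag - 4.0) - 0.05 * dist)))).ravel()
detected = rng.random(n_hist) < p_true                 # stand-in for LEB detections
det_model = LogisticRegression().fit(X_det, detected)

# --- 2. Log-likelihood of an event's observed detection pattern under the model ---
def detection_log_likelihood(station_features, observed):
    """Sum of log P(detected or not at each station | proposed event)."""
    p = det_model.predict_proba(station_features)[:, 1]
    return np.sum(np.where(observed, np.log(p), np.log1p(-p)))

# --- 3. Event-level classifier and operating point ---
def make_events(n_events):
    feats, labels = [], []
    for _ in range(n_events):
        legit = rng.random() < 0.5
        m = rng.uniform(3.0, 6.0)
        d = rng.uniform(0.0, 90.0, n_stations)
        sta = np.column_stack([np.full(n_stations, m), d])
        p = det_model.predict_proba(sta)[:, 1]
        # Legitimate events follow the detection model; false events do not.
        obs = rng.random(n_stations) < (p if legit else rng.random(n_stations))
        feats.append([detection_log_likelihood(sta, obs), obs.sum(), m])
        labels.append(legit)
    return np.array(feats), np.array(labels)

X_tr, y_tr = make_events(4000)
X_te, y_te = make_events(1000)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# Threshold the legitimacy score so that ~5% of legitimate training events are flagged.
legit_scores = clf.predict_proba(X_tr[y_tr])[:, 1]
threshold = np.quantile(legit_scores, 0.05)
flagged = clf.predict_proba(X_te)[:, 1] < threshold
print("false events identified:   ", np.mean(flagged[~y_te]))
print("legitimate events flagged: ", np.mean(flagged[y_te]))
```

The key design choice mirrored here is that the operating threshold is set on legitimate events only, so the false-alarm rate on analyst-approved events stays near a fixed budget while the detection rate for false events is whatever the features can deliver.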