Please use this link to cite this publication or to refer to it as an internet source: https://hdl.handle.net/10419/296444
Year of publication:
2023
Citation:
[Journal:] Theoretical Economics [ISSN:] 1555-7561 [Volume:] 18 [Issue:] 4 [Year:] 2023 [Pages:] 1441-1474
Publisher:
The Econometric Society, New Haven, CT
Abstract:
In order to identify expertise, forecasters should not be tested by their calibration score, which can always be made arbitrarily small, but rather by their Brier score. The Brier score is the sum of the calibration score and the refinement score; the latter measures how good the sorting into bins with the same forecast is, and thus attests to “expertise.” This raises the question of whether one can gain calibration without losing expertise, which we refer to as “calibeating.” We provide an easy way to calibeat any forecast, by a deterministic online procedure. We moreover show that calibeating can be achieved by a stochastic procedure that is itself calibrated, and then extend the results to simultaneously calibeating multiple procedures, and to deterministic procedures that are continuously calibrated.
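The decomposition the abstract relies on can be stated concretely. Below is a minimal sketch (not taken from the paper) of the Brier score split into calibration and refinement for binary outcomes, where the bins are the sets of periods that received the same forecast; the function name and example data are illustrative only.

```python
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """Return (brier, calibration, refinement) for 0/1 outcomes.

    brier       = (1/N) * sum_t (f_t - y_t)^2
    calibration = (1/N) * sum over bins  n_b * (f_b - ybar_b)^2
    refinement  = (1/N) * sum over bins  n_b * ybar_b * (1 - ybar_b)
    and brier == calibration + refinement.
    """
    n = len(forecasts)
    bins = defaultdict(list)
    for f, y in zip(forecasts, outcomes):
        bins[f].append(y)                      # group periods by forecast value
    brier = sum((f - y) ** 2 for f, y in zip(forecasts, outcomes)) / n
    calibration = refinement = 0.0
    for f, ys in bins.items():
        ybar = sum(ys) / len(ys)               # empirical frequency in the bin
        calibration += len(ys) * (f - ybar) ** 2 / n
        refinement += len(ys) * ybar * (1 - ybar) / n
    return brier, calibration, refinement

# Example: always forecasting 0.5 against alternating outcomes is perfectly
# calibrated (calibration = 0), but the refinement score stays at 0.25.
print(brier_decomposition([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0]))
# -> (0.25, 0.0, 0.25)
```

This illustrates the abstract's point: the calibration term can be driven to zero without saying anything informative, while the refinement term reflects how well the forecasts sort the periods into bins with homogeneous outcomes.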
Keywords:
Brier score
calibeating
calibrated forecasts
calibration score
experts
refinement score
JEL: 
C7
D8
Persistent identifier of the first edition:
Creative Commons License:
cc-by-nc
Document type:
Article

Publications in EconStor are protected by copyright.