05 Oct 2004 03:03:50 UTC
- Distribution: Decision-Markov
- Module version: 0.03
Decision::Markov - Markov models for decision analysis
    use Decision::Markov;
    $model = new Decision::Markov;
    $state = $model->AddState("Name", $utility);
    $error = $model->AddPath($state1, $state2, $probability);
    $error = $model->Check;
    $model->Reset([$starting_state, [$number_of_patients]]);
    $error = $model->StartingState($starting_state [, $number_of_patients]);
    $model->DiscountRate($rate);
    ($utility, $cycles) = $model->EvalMC();
    $state = $model->EvalMCStep($cycle);
    ($utility, $cycles) = $model->EvalCoh();
    $patients_left = $model->EvalCohStep($cycle);
    $model->PrintCycle($FH, $cycle);
    $model->PrintMatrix($FH);
This module provides functions used to build and evaluate Markov models for use in decision analysis. A Markov model consists of a set of states, each with an associated utility, and links between states representing the probability of moving from one state to another. States typically include links to themselves. Utilities and probabilities may be fixed or may be functions of the time in cycles since the model began running.
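To make the idea concrete, here is a minimal plain-Perl sketch of a cohort running through a hypothetical two-state model ("Well" and "Dead" are invented for illustration; this does not use the module's API):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical two-state model: "Well" (utility 1) and "Dead" (utility 0).
# Each cycle, a Well patient dies with probability 0.1.
my %utility = ( Well => 1, Dead => 0 );
my %cohort  = ( Well => 1000, Dead => 0 );   # 1000 patients start Well
my $p_die   = 0.1;

my $total_utility = 0;
for my $cycle (1 .. 3) {
    # Accumulate the utility earned this cycle, then apply transitions.
    $total_utility += $cohort{$_} * $utility{$_} for keys %cohort;
    my $dying = $cohort{Well} * $p_die;
    $cohort{Well} -= $dying;
    $cohort{Dead} += $dying;
}
my $avg_utility = $total_utility / 1000;   # average cumulative utility per patient
```

After three cycles the average cumulative utility per patient is 2.71 (1 + 0.9 + 0.81), the kind of figure EvalCoh reports for a full run.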
Create a new Markov model.
Add a state to the model. The arguments are a string describing the state and the utility of the state. The utility may be specified either as a number or as a reference to a subroutine that returns the utility. The subroutine will be passed the current cycle number as an argument. Returns the new state, which is an object of class Decision::Markov::State.
Adds a path between two states. The arguments are the source state, the destination state, and the probability of transition.
Probability may be specified either as a number or as a reference to a subroutine that returns the probability. The subroutine will be passed the current cycle number as an argument.
AddPath returns undef if successful, error message otherwise.
Checks all states in the model to ensure that the probabilities of the paths from each state sum to 1. Returns undef if the model checks out, error message otherwise.
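The consistency check can be sketched in plain Perl as follows (the state names and data layout here are illustrative, not the module's internals):

```perl
use strict;
use warnings;

# Transition table for a hypothetical two-state model.
my %paths = (
    Well => { Well => 0.9, Dead => 0.1 },
    Dead => { Dead => 1.0 },
);

# Outgoing probabilities from each state must sum to 1 (within rounding).
# Returns undef on success, an error message otherwise, as Check() does.
sub check_model {
    my ($paths) = @_;
    for my $state (sort keys %$paths) {
        my $sum = 0;
        $sum += $_ for values %{ $paths->{$state} };
        return "Probabilities from $state sum to $sum, not 1"
            if abs($sum - 1) > 1e-9;
    }
    return undef;
}
```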
Resets the model. Use before evaluating the model.
Sets the state in which patients start when the model is evaluated. The optional second argument sets the number of patients in a cohort when performing a cohort simulation.
Returns undef if successful or an error message.
Sets the per-cycle discount rate for utility. By default, there is no discounting. To set, for example, 3%/cycle discounting, use $model->DiscountRate(0.03).
If no discount rate is given, returns the current discount rate.
Performs a Monte Carlo simulation of a single patient through the model, and returns that patient's cumulative utility and the number of cycles the model ran. The patient begins in the state set by StartingState.
Given the current model cycle, evaluates a single step of the Markov model, and returns the patient's new state. Internally continues to track the patient's cumulative utility.
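A single Monte Carlo step amounts to drawing the next state at random according to the current state's transition probabilities. A minimal sketch, using an illustrative transition table rather than the module's internals:

```perl
use strict;
use warnings;

my %paths = (
    Well => { Well => 0.9, Dead => 0.1 },
    Dead => { Dead => 1.0 },
);

# Draw the next state: walk the outgoing probabilities and pick the
# interval a uniform random number falls into.
sub mc_step {
    my ($paths, $state) = @_;
    my $r = rand();
    for my $next (sort keys %{ $paths->{$state} }) {
        my $p = $paths->{$state}{$next};
        return $next if $r < $p;
        $r -= $p;
    }
    return $state;   # guard against floating-point rounding drift
}
```

A patient in the absorbing "Dead" state always stays there; a "Well" patient stays well about 90% of the time under these numbers.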
Performs a cohort simulation of the model and returns the average cumulative utility of a patient in the cohort, and the number of cycles the model ran. The number of patients and their initial state are set with StartingState.
Evaluates a single cycle of a cohort simulation. Returns the number of patients who will change states in the next cycle (i.e., if it returns 0, you're at the end of the model run).
Given a FileHandle object and the cycle, prints the current distribution of patients in the cohort (if a cohort simulation is in progress) or the current state and utility of the patient (if a Monte Carlo simulation is in progress).
Given a FileHandle object, prints the model in transition matrix form.
Sonnenberg, F. A. & Beck, J. R. (1993). Markov Models in Medical Decision Making: A Practical Guide. Med. Dec. Making, 13: 322-338.
Copyright (c) 1998 Alan Schwartz <email@example.com>. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Module Install Instructions
To install Decision::Markov, copy and paste the appropriate commands into your terminal.
    perl -MCPAN -e shell
    install Decision::Markov
For more information on module installation, please visit the detailed CPAN module installation guide.