Title: | Group Sequential Design with Non-Constant Effect |
Version: | 1.1.5 |
Description: | The goal of 'gsDesign2' is to enable fixed or group sequential design under non-proportional hazards. To enable highly flexible enrollment, time-to-event and time-to-dropout assumptions, 'gsDesign2' offers piecewise constant enrollment, failure rates, and dropout rates for a stratified population. This package includes three methods for designs: average hazard ratio, weighted logrank tests in Yung and Liu (2019) <doi:10.1111/biom.13196>, and MaxCombo tests. Substantial flexibility on top of what is in the 'gsDesign' package is intended for selecting boundaries. |
License: | GPL-3 |
URL: | https://merck.github.io/gsDesign2/, https://github.com/Merck/gsDesign2 |
BugReports: | https://github.com/Merck/gsDesign2/issues |
Encoding: | UTF-8 |
Depends: | R (≥ 3.5.0) |
Imports: | corpcor, data.table, dplyr, gsDesign, gt, methods, mvtnorm, npsurvSS (≥ 1.1.0), r2rtf, stats, survival, tibble, tidyr, utils, Rcpp |
Suggests: | covr, ggplot2, kableExtra, knitr, rmarkdown, simtrial, testthat (≥ 3.0.0) |
VignetteBuilder: | knitr |
LinkingTo: | Rcpp |
RoxygenNote: | 7.3.2 |
NeedsCompilation: | yes |
Packaged: | 2025-06-27 21:25:39 UTC; zhayuji4 |
Author: | Keaven Anderson [aut], Yujie Zhao [aut, cre], Yilong Zhang [aut], John Blischak [aut], Yihui Xie [aut], Nan Xiao [aut], Jianxiao Yang [aut], Amin Shirazi [ctb], Ruixue Wang [ctb], Yi Cui [ctb], Ping Yang [ctb], Xin Tong Li [ctb], Chenxiang Li [ctb], Hiroaki Fukuda [ctb], Hongtao Zhang [ctb], Yalin Zhu [ctb], Shiyu Zhang [ctb], Dickson Wanjau [ctb], Merck & Co., Inc., Rahway, NJ, USA and its affiliates [cph] |
Maintainer: | Yujie Zhao <yujie.zhao@merck.com> |
Repository: | CRAN |
Date/Publication: | 2025-06-27 21:40:02 UTC |
gsDesign2: Group Sequential Design with Non-Constant Effect
Description
The goal of 'gsDesign2' is to enable fixed or group sequential design under non-proportional hazards. To enable highly flexible enrollment, time-to-event and time-to-dropout assumptions, 'gsDesign2' offers piecewise constant enrollment, failure rates, and dropout rates for a stratified population. This package includes three methods for designs: average hazard ratio, weighted logrank tests in Yung and Liu (2019) doi:10.1111/biom.13196, and MaxCombo tests. Substantial flexibility on top of what is in the 'gsDesign' package is intended for selecting boundaries.
Author(s)
Maintainer: Yujie Zhao yujie.zhao@merck.com
Authors:
Keaven Anderson keaven_anderson@merck.com
Yilong Zhang elong0527@gmail.com
John Blischak jdblischak@gmail.com
Yihui Xie yihui.xie@merck.com
Nan Xiao nan.xiao1@merck.com
Jianxiao Yang yangjx@ucla.edu
Other contributors:
Amin Shirazi ashirazist@gmail.com [contributor]
Ruixue Wang ruixue.wang@merck.com [contributor]
Yi Cui yi.cui@merck.com [contributor]
Ping Yang ping.yang1@merck.com [contributor]
Xin Tong Li xin.tong.li@merck.com [contributor]
Chenxiang Li chenxiang.li@merck.com [contributor]
Hiroaki Fukuda hiroaki.fukuda@merck.com [contributor]
Hongtao Zhang hongtao.zhang1@merck.com [contributor]
Yalin Zhu yalin.zhu@outlook.com [contributor]
Shiyu Zhang shiyu.zhang@merck.com [contributor]
Dickson Wanjau dickson.wanjau@merck.com [contributor]
Merck & Co., Inc., Rahway, NJ, USA and its affiliates [copyright holder]
See Also
Useful links:
https://merck.github.io/gsDesign2/
https://github.com/Merck/gsDesign2
Report bugs at https://github.com/Merck/gsDesign2/issues
Average hazard ratio under non-proportional hazards
Description
Provides a geometric average hazard ratio under various non-proportional hazards assumptions for either single or multiple strata studies. The piecewise exponential distribution allows a simple method to specify a distribution and enrollment pattern where the enrollment, failure, and dropout rates change over time.
Usage
ahr(
enroll_rate = define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9)),
fail_rate = define_fail_rate(duration = c(3, 100), fail_rate = log(2)/c(9, 18), hr =
c(0.9, 0.6), dropout_rate = 0.001),
total_duration = 30,
ratio = 1
)
Arguments
enroll_rate |
An enroll_rate data frame with or without stratum created by define_enroll_rate(). |
fail_rate |
A fail_rate data frame with or without stratum created by define_fail_rate(). |
total_duration |
Total follow-up from start of enrollment to data cutoff; this can be a single value or a vector of positive numbers. |
ratio |
Ratio of experimental to control randomization. |
Value
A data frame with time (from total_duration), ahr (average hazard ratio), n (sample size), event (expected number of events), info (information under given scenarios), and info0 (information under the related null hypothesis) for each value of total_duration input.
Specification
The contents of this section are shown in PDF user manual only.
Examples
# Example 1: default
ahr()
# Example 2: default with multiple analysis times (varying total_duration)
ahr(total_duration = c(15, 30))
# Example 3: stratified population
enroll_rate <- define_enroll_rate(
stratum = c(rep("Low", 2), rep("High", 3)),
duration = c(2, 10, 4, 4, 8),
rate = c(5, 10, 0, 3, 6)
)
fail_rate <- define_fail_rate(
stratum = c(rep("Low", 2), rep("High", 2)),
duration = c(1, Inf, 1, Inf),
fail_rate = c(.1, .2, .3, .4),
dropout_rate = .001,
hr = c(.9, .75, .8, .6)
)
ahr(enroll_rate = enroll_rate, fail_rate = fail_rate, total_duration = c(15, 30))
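The "average" here is an event-weighted geometric mean of the piecewise hazard ratios. A minimal base-R sketch of that weighting (the helper name and the illustrative event counts are hypothetical; ahr() itself derives the expected events from the enrollment and failure-rate inputs):

```r
# Event-weighted geometric mean of piecewise hazard ratios:
# AHR = exp(sum(d_i * log(hr_i)) / sum(d_i)),
# where d_i is the expected event count in interval i.
geometric_ahr <- function(hr, events) {
  exp(sum(events * log(hr)) / sum(events))
}

# With hr = c(0.9, 0.6) and most events accruing in the second interval,
# the average is pulled toward 0.6:
geometric_ahr(hr = c(0.9, 0.6), events = c(20, 80))
```

This is also why the average hazard ratio strengthens at later analysis times under a delayed effect: more of the expected events fall in the later, lower-hr interval.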
Blinded estimation of average hazard ratio
Description
Based on blinded data and assumed hazard ratios in different intervals, compute a blinded estimate of average hazard ratio (AHR) and corresponding estimate of statistical information. This function is intended for use in computing futility bounds based on spending assuming the input hazard ratio (hr) values for intervals specified here.
Usage
ahr_blinded(
surv = survival::Surv(time = simtrial::ex1_delayed_effect$month, event =
simtrial::ex1_delayed_effect$evntd),
intervals = c(3, Inf),
hr = c(1, 0.6),
ratio = 1
)
Arguments
surv |
Input survival object (see survival::Surv()). |
intervals |
Vector containing positive values indicating interval lengths where the exponential rates are assumed. Note that a final infinite interval is added if any events occur after the final interval specified. |
hr |
Vector of hazard ratios assumed for each interval. |
ratio |
Ratio of experimental to control randomization. |
Value
A tibble with one row containing:
- ahr: blinded average hazard ratio based on the assumed period-specific hazard ratios input in fail_rate and the observed events in the corresponding intervals.
- event: total observed number of events.
- info0: information under the related null hypothesis.
- theta: natural parameter for group sequential design representing expected incremental drift at all analyses.
Specification
The contents of this section are shown in PDF user manual only.
Examples
ahr_blinded(
surv = survival::Surv(
time = simtrial::ex2_delayed_effect$month,
event = simtrial::ex2_delayed_effect$evntd
),
intervals = c(4, 100),
hr = c(1, .55),
ratio = 1
)
Convert summary table of a fixed or group sequential design object to a gt object
Description
Convert summary table of a fixed or group sequential design object to a gt object
Usage
as_gt(x, ...)
## S3 method for class 'fixed_design'
as_gt(x, title = NULL, footnote = NULL, ...)
## S3 method for class 'gs_design'
as_gt(
x,
title = NULL,
subtitle = NULL,
colname_spanner = "Cumulative boundary crossing probability",
colname_spannersub = c("Alternate hypothesis", "Null hypothesis"),
footnote = NULL,
display_bound = c("Efficacy", "Futility"),
display_columns = NULL,
display_inf_bound = FALSE,
...
)
Arguments
x |
A summary object of a fixed or group sequential design. |
... |
Additional arguments (not used). |
title |
A string to specify the title of the gt table. |
footnote |
A list containing content, location, and attr to specify the footnotes of the gt table. |
subtitle |
A string to specify the subtitle of the gt table. |
colname_spanner |
A string to specify the spanner of the gt table. |
colname_spannersub |
A vector of strings to specify the spanner details of the gt table. |
display_bound |
A vector of strings specifying the label of the bounds. The default is c("Efficacy", "Futility"). |
display_columns |
A vector of strings specifying the variables to be displayed in the summary table. |
display_inf_bound |
Logical, whether to display the +/-inf bound. |
Value
A gt_tbl object.
Examples
# Fixed design examples ----
library(dplyr)
# Enrollment rate
enroll_rate <- define_enroll_rate(
duration = 18,
rate = 20
)
# Failure rates
fail_rate <- define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
dropout_rate = .001,
hr = c(1, .6)
)
# Study duration in months
study_duration <- 36
# Experimental / Control randomization ratio
ratio <- 1
# 1-sided Type I error
alpha <- 0.025
# Type II error (1 - power)
beta <- 0.1
# Example 1 ----
fixed_design_ahr(
alpha = alpha, power = 1 - beta,
enroll_rate = enroll_rate, fail_rate = fail_rate,
study_duration = study_duration, ratio = ratio
) %>%
summary() %>%
as_gt()
# Example 2 ----
fixed_design_fh(
alpha = alpha, power = 1 - beta,
enroll_rate = enroll_rate, fail_rate = fail_rate,
study_duration = study_duration, ratio = ratio
) %>%
summary() %>%
as_gt()
# Group sequential design examples ---
library(dplyr)
# Example 1 ----
# The default output
gs_design_ahr() %>%
summary() %>%
as_gt()
gs_power_ahr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_gt()
gs_design_wlr() %>%
summary() %>%
as_gt()
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_gt()
gs_power_combo() %>%
summary() %>%
as_gt()
gs_design_rd() %>%
summary() %>%
as_gt()
gs_power_rd() %>%
summary() %>%
as_gt()
# Example 2 ----
# Usage of title = ..., subtitle = ...
# to edit the title/subtitle
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_gt(
title = "Bound Summary",
subtitle = "from gs_power_wlr"
)
# Example 3 ----
# Usage of colname_spanner = ..., colname_spannersub = ...
# to edit the spanner and its sub-spanner
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_gt(
colname_spanner = "Cumulative probability to cross boundaries",
colname_spannersub = c("under H1", "under H0")
)
# Example 4 ----
# Usage of footnote = ...
# to edit the footnote
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_gt(
footnote = list(
content = c(
"approximate weighted hazard ratio to cross bound.",
"wAHR is the weighted AHR.",
"the crossing probability.",
"this table is generated by gs_power_wlr."
),
location = c("~wHR at bound", NA, NA, NA),
attr = c("colname", "analysis", "spanner", "title")
)
)
# Example 5 ----
# Usage of display_bound = ...
# to either show efficacy bound or futility bound, or both (default)
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_gt(display_bound = "Efficacy")
# Example 6 ----
# Usage of display_columns = ...
# to select the columns to display in the summary table
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_gt(display_columns = c("Analysis", "Bound", "Nominal p", "Z", "Probability"))
Write summary table of a fixed or group sequential design object to an RTF file
Description
Write summary table of a fixed or group sequential design object to an RTF file
Usage
as_rtf(x, ...)
## S3 method for class 'fixed_design'
as_rtf(
x,
title = NULL,
footnote = NULL,
col_rel_width = NULL,
orientation = c("portrait", "landscape"),
text_font_size = 9,
file,
...
)
## S3 method for class 'gs_design'
as_rtf(
x,
title = NULL,
subtitle = NULL,
colname_spanner = "Cumulative boundary crossing probability",
colname_spannersub = c("Alternate hypothesis", "Null hypothesis"),
footnote = NULL,
display_bound = c("Efficacy", "Futility"),
display_columns = NULL,
display_inf_bound = TRUE,
col_rel_width = NULL,
orientation = c("portrait", "landscape"),
text_font_size = 9,
file,
...
)
Arguments
x |
A summary object of a fixed or group sequential design. |
... |
Additional arguments (not used). |
title |
A string to specify the title of the RTF table. |
footnote |
A list containing content, location, and attr to specify the footnotes of the RTF table. |
col_rel_width |
Column relative width in a vector e.g. c(2,1,1) refers to 2:1:1. Default is NULL for equal column width. |
orientation |
Orientation in 'portrait' or 'landscape'. |
text_font_size |
Text font size. To vary text font size by column, use numeric vector with length of vector equal to number of columns displayed e.g. c(9,20,40). |
file |
File path for the output. |
subtitle |
A string to specify the subtitle of the RTF table. |
colname_spanner |
A string to specify the spanner of the RTF table. |
colname_spannersub |
A vector of strings to specify the spanner details of the RTF table. |
display_bound |
A vector of strings specifying the label of the bounds. The default is c("Efficacy", "Futility"). |
display_columns |
A vector of strings specifying the variables to be displayed in the summary table. |
display_inf_bound |
Logical, whether to display the +/-inf bound. |
Value
as_rtf() returns the input x invisibly.
Examples
library(dplyr)
# Enrollment rate
enroll_rate <- define_enroll_rate(
duration = 18,
rate = 20
)
# Failure rates
fail_rate <- define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
dropout_rate = .001,
hr = c(1, .6)
)
# Study duration in months
study_duration <- 36
# Experimental / Control randomization ratio
ratio <- 1
# 1-sided Type I error
alpha <- 0.025
# Type II error (1 - power)
beta <- 0.1
# AHR ----
# under fixed power
x <- fixed_design_ahr(
alpha = alpha, power = 1 - beta,
enroll_rate = enroll_rate, fail_rate = fail_rate,
study_duration = study_duration, ratio = ratio
) %>% summary()
x %>% as_rtf(file = tempfile(fileext = ".rtf"))
x %>% as_rtf(title = "Fixed design", file = tempfile(fileext = ".rtf"))
x %>% as_rtf(
footnote = "Power computed with average hazard ratio method given the sample size",
file = tempfile(fileext = ".rtf")
)
x %>% as_rtf(text_font_size = 10, file = tempfile(fileext = ".rtf"))
# FH ----
# under fixed power
fixed_design_fh(
alpha = alpha, power = 1 - beta,
enroll_rate = enroll_rate, fail_rate = fail_rate,
study_duration = study_duration, ratio = ratio
) %>%
summary() %>%
as_rtf(file = tempfile(fileext = ".rtf"))
# the default output
library(dplyr)
gs_design_ahr() %>%
summary() %>%
as_rtf(file = tempfile(fileext = ".rtf"))
gs_power_ahr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_rtf(file = tempfile(fileext = ".rtf"))
gs_design_wlr() %>%
summary() %>%
as_rtf(file = tempfile(fileext = ".rtf"))
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_rtf(file = tempfile(fileext = ".rtf"))
gs_power_combo() %>%
summary() %>%
as_rtf(file = tempfile(fileext = ".rtf"))
gs_design_rd() %>%
summary() %>%
as_rtf(file = tempfile(fileext = ".rtf"))
gs_power_rd() %>%
summary() %>%
as_rtf(file = tempfile(fileext = ".rtf"))
# usage of title = ..., subtitle = ...
# to edit the title/subtitle
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_rtf(
title = "Bound Summary",
subtitle = "from gs_power_wlr",
file = tempfile(fileext = ".rtf")
)
# usage of colname_spanner = ..., colname_spannersub = ...
# to edit the spanner and its sub-spanner
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_rtf(
colname_spanner = "Cumulative probability to cross boundaries",
colname_spannersub = c("under H1", "under H0"),
file = tempfile(fileext = ".rtf")
)
# usage of footnote = ...
# to edit the footnote
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_rtf(
footnote = list(
content = c(
"approximate weighted hazard ratio to cross bound.",
"wAHR is the weighted AHR.",
"the crossing probability.",
"this table is generated by gs_power_wlr."
),
location = c("~wHR at bound", NA, NA, NA),
attr = c("colname", "analysis", "spanner", "title")
),
file = tempfile(fileext = ".rtf")
)
# usage of display_bound = ...
# to either show efficacy bound or futility bound, or both (default)
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_rtf(
display_bound = "Efficacy",
file = tempfile(fileext = ".rtf")
)
# usage of display_columns = ...
# to select the columns to display in the summary table
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
summary() %>%
as_rtf(
display_columns = c("Analysis", "Bound", "Nominal p", "Z", "Probability"),
file = tempfile(fileext = ".rtf")
)
Define enrollment rate
Description
Define the enrollment rate of subjects for a study as following a piecewise exponential distribution.
Usage
define_enroll_rate(duration, rate, stratum = "All")
Arguments
duration |
A numeric vector of ordered piecewise study duration intervals. |
rate |
A numeric vector of enrollment rates in each duration interval. |
stratum |
A character vector of stratum name. |
Details
The duration is ordered piecewise, with each interval of length t_i - t_{i-1}, where 0 = t_0 < t_1 < \cdots < t_M = \infty. The enrollment rates are defined in each duration with the same length. For a study with multiple strata, different durations and rates can be specified in each stratum.
Value
An enroll_rate data frame.
Examples
# Define enroll rate without stratum
define_enroll_rate(
duration = c(2, 2, 10),
rate = c(3, 6, 9)
)
# Define enroll rate with stratum
define_enroll_rate(
duration = rep(c(2, 2, 2, 18), 3),
rate = c((1:4) / 3, (1:4) / 2, (1:4) / 6),
stratum = c(array("High", 4), array("Moderate", 4), array("Low", 4))
)
Define failure rate
Description
Define subject failure rate for a study with two treatment groups. Also supports stratified designs that have different failure rates in each stratum.
Usage
define_fail_rate(duration, fail_rate, dropout_rate, hr = 1, stratum = "All")
Arguments
duration |
A numeric vector of ordered piecewise study duration intervals. |
fail_rate |
A numeric vector of failure rates in each duration interval. |
dropout_rate |
A numeric vector of dropout rates in each duration interval. |
hr |
A numeric vector of hazard ratio between treatment and control group. |
stratum |
A character vector of stratum name. |
Details
Define the failure and dropout rates of subjects for a study as following a piecewise exponential distribution. The duration is ordered piecewise, with each interval of length t_i - t_{i-1}, where 0 = t_0 < t_1 < \cdots < t_M = \infty. The failure rate, dropout rate, and hazard ratio in each study duration can be specified. For a study with multiple strata, different durations, failure rates, dropout rates, and hazard ratios can be specified in each stratum.
Value
A fail_rate data frame.
Examples
# Define fail rate without stratum
define_fail_rate(
duration = c(3, 100),
fail_rate = log(2) / c(9, 18),
hr = c(.9, .6),
dropout_rate = .001
)
# Define fail rate with stratum
define_fail_rate(
stratum = c(rep("Low", 2), rep("High", 2)),
duration = 1,
fail_rate = c(.1, .2, .3, .4),
dropout_rate = .001,
hr = c(.9, .75, .8, .6)
)
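Under this model the control-arm survival curve is piecewise exponential. A base-R sketch (hypothetical helper, not the package implementation) evaluates S(t) = exp(-sum over intervals of lambda_i times the time spent in interval i):

```r
# Survival at time t for piecewise constant hazard rates:
# S(t) = exp(-sum(fail_rate_i * overlap of [0, t] with interval i)).
pw_surv <- function(time, duration, fail_rate) {
  starts <- cumsum(c(0, duration[-length(duration)]))
  sapply(time, function(t) {
    exp(-sum(fail_rate * pmax(0, pmin(t - starts, duration))))
  })
}

# With fail_rate = log(2) / c(9, 18) and a change point at month 3,
# the median survival falls between the 9- and 18-month exponential medians.
pw_surv(c(3, 9, 18), duration = c(3, 100), fail_rate = log(2) / c(9, 18))
```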
Piecewise constant expected accrual
Description
Computes the expected cumulative enrollment (accrual) given a set of piecewise constant enrollment rates and times.
Usage
expected_accrual(
time = 0:24,
enroll_rate = define_enroll_rate(duration = c(3, 3, 18), rate = c(5, 10, 20))
)
Arguments
time |
Times at which enrollment is to be computed. |
enroll_rate |
An enroll_rate data frame with or without stratum created by define_enroll_rate(). |
Value
A vector with expected cumulative enrollment for the specified times.
Specification
The contents of this section are shown in PDF user manual only.
Examples
library(tibble)
# Example 1: default
expected_accrual()
# Example 2: unstratified design
expected_accrual(
time = c(5, 10, 20),
enroll_rate = define_enroll_rate(
duration = c(3, 3, 18),
rate = c(5, 10, 20)
)
)
# Example 3: stratified design
expected_accrual(
time = c(24, 30, 40),
enroll_rate = define_enroll_rate(
stratum = c("subgroup", "complement"),
duration = c(33, 33),
rate = c(30, 30)
)
)
# Example 4: expected accrual over time
# Scenario 4.1: for the enrollment in the first 3 months,
# it is exactly 3 * 5 = 15.
expected_accrual(
time = 3,
enroll_rate = define_enroll_rate(duration = c(3, 3, 18), rate = c(5, 10, 20))
)
# Scenario 4.2: for the enrollment in the first 6 months,
# it is exactly 3 * 5 + 3 * 10 = 45.
expected_accrual(
time = 6,
enroll_rate = define_enroll_rate(duration = c(3, 3, 18), rate = c(5, 10, 20))
)
# Scenario 4.3: for the enrollment in the first 24 months,
# it is exactly 3 * 5 + 3 * 10 + 18 * 20 = 405.
expected_accrual(
time = 24,
enroll_rate = define_enroll_rate(duration = c(3, 3, 18), rate = c(5, 10, 20))
)
# Scenario 4.4: for the enrollment after 24 months,
# it is the same as that from the 24 months, since the enrollment is stopped.
expected_accrual(
time = 25,
enroll_rate = define_enroll_rate(duration = c(3, 3, 18), rate = c(5, 10, 20))
)
# Instead of computing the enrolled subjects one time point at a time,
# we can also compute them all at once.
expected_accrual(
time = c(3, 6, 24, 25),
enroll_rate = define_enroll_rate(duration = c(3, 3, 18), rate = c(5, 10, 20))
)
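The piecewise constant accrual in the scenarios above can be sketched in a few lines of base R (a hypothetical helper, not the package implementation):

```r
# Cumulative expected accrual at each time: sum over enrollment
# intervals of rate * (time spent in that interval so far).
pw_accrual <- function(time, duration, rate) {
  starts <- cumsum(c(0, duration[-length(duration)]))
  sapply(time, function(t) {
    sum(rate * pmax(0, pmin(t - starts, duration)))
  })
}

# Matches the scenarios above: 15, 45, 405, 405
pw_accrual(c(3, 6, 24, 25), duration = c(3, 3, 18), rate = c(5, 10, 20))
```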
Expected events observed under piecewise exponential model
Description
Computes expected events over time and by strata under the assumption of piecewise constant enrollment rates and piecewise exponential failure and censoring rates. The piecewise exponential distribution allows a simple method to specify a distribution and enrollment pattern where the enrollment, failure, and dropout rates change over time. While the main purpose may be to generate a trial that can be analyzed at a single point in time or using group sequential methods, the routine can also be used to simulate an adaptive trial design. The intent is to enable sample size calculations under non-proportional hazards assumptions for stratified populations.
Usage
expected_event(
enroll_rate = define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9)),
fail_rate = define_fail_rate(duration = c(3, 100), fail_rate = log(2)/c(9, 18),
dropout_rate = 0.001),
total_duration = 25,
simple = TRUE
)
Arguments
enroll_rate |
An enroll_rate data frame with or without stratum created by define_enroll_rate(). |
fail_rate |
A fail_rate data frame with or without stratum created by define_fail_rate(). |
total_duration |
Total follow-up from start of enrollment to data cutoff. |
simple |
If default (TRUE), return the total expected number of events as a real number; otherwise (FALSE), return a data frame as described in the Value section. |
Details
More periods will generally be supplied in output than those that are input. The intent is to enable expected event calculations in a tidy format to maximize flexibility for a variety of purposes.
Value
The default when simple = TRUE is to return the total expected number of events as a real number. Otherwise, when simple = FALSE, a data frame is returned with the following variables for each period specified in fail_rate:
- t: start of period.
- fail_rate: failure rate during the period.
- event: expected events during the period.
The records in the returned data frame correspond to the input data frame fail_rate.
Specification
Examples
library(gsDesign2)
# Default arguments, simple output (total event count only)
expected_event()
# Event count by time period
expected_event(simple = FALSE)
# Early cutoff
expected_event(total_duration = .5)
# Single time period example
expected_event(
enroll_rate = define_enroll_rate(duration = 10, rate = 10),
fail_rate = define_fail_rate(duration = 100, fail_rate = log(2) / 6, dropout_rate = .01),
total_duration = 22,
simple = FALSE
)
# Single time period example, multiple enrollment periods
expected_event(
enroll_rate = define_enroll_rate(duration = c(5, 5), rate = c(10, 20)),
fail_rate = define_fail_rate(duration = 100, fail_rate = log(2) / 6, dropout_rate = .01),
total_duration = 22, simple = FALSE
)
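For intuition, the single time period case above can be approximated by integrating over entry times in base R (a hypothetical sketch; expected_event() computes this analytically). Dropout is treated as an independent competing exponential censoring process:

```r
# Expected events with constant enrollment rate g over [0, enroll_dur],
# constant event hazard lambda, dropout hazard eta, data cutoff total_dur.
# A subject entering at time u is followed for total_dur - u and has an
# observed event (before dropout or cutoff) with probability
#   lambda / (lambda + eta) * (1 - exp(-(lambda + eta) * follow_up)).
expected_event_1piece <- function(g, lambda, eta, enroll_dur, total_dur) {
  integrand <- function(u) {
    follow_up <- pmax(0, total_dur - u)
    g * lambda / (lambda + eta) * (1 - exp(-(lambda + eta) * follow_up))
  }
  integrate(integrand, lower = 0, upper = enroll_dur)$value
}

# Settings matching the single time period example above
expected_event_1piece(
  g = 10, lambda = log(2) / 6, eta = 0.01,
  enroll_dur = 10, total_dur = 22
)
```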
Predict time at which a targeted event count is achieved
Description
expected_time() is made to match the input format of ahr() and to solve for the time at which the expected accumulated events equal an input target. Enrollment and failure rate distributions are specified as follows. The piecewise exponential distribution allows a simple method to specify a distribution and enrollment pattern where the enrollment, failure, and dropout rates change over time.
Usage
expected_time(
enroll_rate = define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9) * 5),
fail_rate = define_fail_rate(stratum = "All", duration = c(3, 100), fail_rate =
log(2)/c(9, 18), hr = c(0.9, 0.6), dropout_rate = rep(0.001, 2)),
target_event = 150,
ratio = 1,
interval = c(0.01, 100)
)
Arguments
enroll_rate |
An enroll_rate data frame with or without stratum created by define_enroll_rate(). |
fail_rate |
A fail_rate data frame with or without stratum created by define_fail_rate(). |
target_event |
The targeted number of events to be achieved. |
ratio |
Experimental:Control randomization ratio. |
interval |
An interval presumed to include the time at which the expected event count is equal to target_event. |
Value
A data frame with time (computed to match events in target_event), ahr (average hazard ratio), event (target_event input), info (information under given scenarios), and info0 (information under the related null hypothesis).
Specification
Examples
# Example 1 ----
# default
expected_time()
# Example 2 ----
# check that result matches a finding using ahr()
# Start by deriving an expected event count
enroll_rate <- define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9) * 5)
fail_rate <- define_fail_rate(
duration = c(3, 100),
fail_rate = log(2) / c(9, 18),
hr = c(.9, .6),
dropout_rate = .001
)
total_duration <- 20
xx <- ahr(enroll_rate, fail_rate, total_duration)
xx
# Next we check that the function confirms the timing of the final analysis.
expected_time(enroll_rate, fail_rate,
target_event = xx$event, interval = c(.5, 1.5) * xx$time
)
# Example 3 ----
# In this example, we verify `expected_time()` by `ahr()`.
x <- ahr(
enroll_rate = enroll_rate, fail_rate = fail_rate,
ratio = 1, total_duration = 20
)
cat("The number of events by 20 months is ", x$event, ".\n")
y <- expected_time(
enroll_rate = enroll_rate, fail_rate = fail_rate,
ratio = 1, target_event = x$event
)
cat("The time to get ", x$event, " is ", y$time, "months.\n")
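Internally this is a one-dimensional root search over the interval argument: find the time where expected events minus target_event crosses zero. A self-contained illustration of the same idea with base R's uniroot(), using cumulative accrual instead of events (a hypothetical simplification):

```r
# Find when a piecewise constant accrual reaches a target count via
# root finding -- the same mechanism expected_time() applies to events.
pw_accrual_1 <- function(t, duration, rate) {
  starts <- cumsum(c(0, duration[-length(duration)]))
  sum(rate * pmax(0, pmin(t - starts, duration)))
}

target <- 45
sol <- uniroot(
  function(t) pw_accrual_1(t, duration = c(3, 3, 18), rate = c(5, 10, 20)) - target,
  interval = c(0.01, 24),
  tol = 1e-8
)
sol$root # 3 * 5 + 3 * 10 = 45 subjects are enrolled at month 6
```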
Fixed design under non-proportional hazards
Description
Computes fixed design sample size (given power) or power (given sample size) by:
- fixed_design_ahr() - Average hazard ratio method.
- fixed_design_fh() - Weighted logrank test with Fleming-Harrington weights (Farrington and Manning, 1990).
- fixed_design_mb() - Weighted logrank test with Magirr-Burman weights.
- fixed_design_lf() - Lachin-Foulkes method (Lachin and Foulkes, 1986).
- fixed_design_maxcombo() - MaxCombo method.
- fixed_design_rmst() - RMST method.
- fixed_design_milestone() - Milestone method.
Additionally, fixed_design_rd() provides a fixed design for a binary endpoint with the treatment effect measured by risk difference.
Usage
fixed_design_ahr(
enroll_rate,
fail_rate,
alpha = 0.025,
power = NULL,
ratio = 1,
study_duration = 36,
event = NULL
)
fixed_design_fh(
alpha = 0.025,
power = NULL,
ratio = 1,
study_duration = 36,
enroll_rate,
fail_rate,
rho = 0,
gamma = 0
)
fixed_design_lf(
alpha = 0.025,
power = NULL,
ratio = 1,
study_duration = 36,
enroll_rate,
fail_rate
)
fixed_design_maxcombo(
alpha = 0.025,
power = NULL,
ratio = 1,
study_duration = 36,
enroll_rate,
fail_rate,
rho = c(0, 0, 1),
gamma = c(0, 1, 0),
tau = rep(-1, 3)
)
fixed_design_mb(
alpha = 0.025,
power = NULL,
ratio = 1,
study_duration = 36,
enroll_rate,
fail_rate,
tau = 6,
w_max = Inf
)
fixed_design_milestone(
alpha = 0.025,
power = NULL,
ratio = 1,
enroll_rate,
fail_rate,
study_duration = 36,
tau = NULL
)
fixed_design_rd(
alpha = 0.025,
power = NULL,
ratio = 1,
p_c,
p_e,
rd0 = 0,
n = NULL
)
fixed_design_rmst(
alpha = 0.025,
power = NULL,
ratio = 1,
study_duration = 36,
enroll_rate,
fail_rate,
tau = NULL
)
Arguments
enroll_rate |
Enrollment rates defined by define_enroll_rate(). |
fail_rate |
Failure and dropout rates defined by define_fail_rate(). |
alpha |
One-sided Type I error (strictly between 0 and 1). |
power |
Power (NULL to compute power, or strictly between 0 and 1 - alpha to compute sample size). |
ratio |
Experimental:Control randomization ratio. |
study_duration |
Study duration. |
event |
A numerical vector specifying the targeted events at each analysis. See details. |
rho |
A vector of numbers pairing with gamma and tau for the MaxCombo test. |
gamma |
A vector of numbers pairing with rho and tau for the MaxCombo test. |
tau |
Test parameter in RMST. |
w_max |
Test parameter of Magirr-Burman method. |
p_c |
A numerical value of the control arm rate. |
p_e |
A numerical value of the experimental arm rate. |
rd0 |
Risk difference under null hypothesis, default is 0. |
n |
Sample size. If NULL with power input, the sample size will be computed to achieve the targeted power. |
Value
A list summarizing the design characteristics.
Examples
# AHR method ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_ahr(
alpha = .025, power = .9,
enroll_rate = define_enroll_rate(duration = 18, rate = 1),
fail_rate = define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
hr = c(1, .6),
dropout_rate = .001
),
study_duration = 36
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_ahr(
alpha = .025,
enroll_rate = define_enroll_rate(duration = 18, rate = 20),
fail_rate = define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
hr = c(1, .6),
dropout_rate = .001
),
study_duration = 36
)
x %>% summary()
# WLR test with FH weights ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_fh(
alpha = .025, power = .9,
enroll_rate = define_enroll_rate(duration = 18, rate = 1),
fail_rate = define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
hr = c(1, .6),
dropout_rate = .001
),
study_duration = 36,
rho = 1, gamma = 1
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_fh(
alpha = .025,
enroll_rate = define_enroll_rate(duration = 18, rate = 20),
fail_rate = define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
hr = c(1, .6),
dropout_rate = .001
),
study_duration = 36,
rho = 1, gamma = 1
)
x %>% summary()
# LF method ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_lf(
alpha = .025, power = .9,
enroll_rate = define_enroll_rate(duration = 18, rate = 1),
fail_rate = define_fail_rate(
duration = 100,
fail_rate = log(2) / 12,
hr = .7,
dropout_rate = .001
),
study_duration = 36
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_lf(
alpha = .025,
enroll_rate = define_enroll_rate(duration = 18, rate = 20),
fail_rate = define_fail_rate(
duration = 100,
fail_rate = log(2) / 12,
hr = .7,
dropout_rate = .001
),
study_duration = 36
)
x %>% summary()
# MaxCombo test ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_maxcombo(
alpha = .025, power = .9,
enroll_rate = define_enroll_rate(duration = 18, rate = 1),
fail_rate = define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
hr = c(1, .6),
dropout_rate = .001
),
study_duration = 36,
rho = c(0, 0.5), gamma = c(0, 0), tau = c(-1, -1)
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_maxcombo(
alpha = .025,
enroll_rate = define_enroll_rate(duration = 18, rate = 20),
fail_rate = define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
hr = c(1, .6),
dropout_rate = .001
),
study_duration = 36,
rho = c(0, 0.5), gamma = c(0, 0), tau = c(-1, -1)
)
x %>% summary()
# WLR test with MB weights ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_mb(
alpha = .025, power = .9,
enroll_rate = define_enroll_rate(duration = 18, rate = 1),
fail_rate = define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
hr = c(1, .6),
dropout_rate = .001
),
study_duration = 36,
tau = 4,
w_max = 2
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_mb(
alpha = .025,
enroll_rate = define_enroll_rate(duration = 18, rate = 20),
fail_rate = define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
hr = c(1, .6),
dropout_rate = .001
),
study_duration = 36,
tau = 4,
w_max = 2
)
x %>% summary()
# Milestone method ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_milestone(
alpha = .025, power = .9,
enroll_rate = define_enroll_rate(duration = 18, rate = 1),
fail_rate = define_fail_rate(
duration = 100,
fail_rate = log(2) / 12,
hr = .7,
dropout_rate = .001
),
study_duration = 36,
tau = 18
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_milestone(
alpha = .025,
enroll_rate = define_enroll_rate(duration = 18, rate = 20),
fail_rate = define_fail_rate(
duration = 100,
fail_rate = log(2) / 12,
hr = .7,
dropout_rate = .001
),
study_duration = 36,
tau = 18
)
x %>% summary()
# Binary endpoint with risk differences ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_rd(
alpha = 0.025, power = 0.9, p_c = .15, p_e = .1,
rd0 = 0, ratio = 1
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_rd(
alpha = 0.025, power = NULL, p_c = .15, p_e = .1,
rd0 = 0, n = 2000, ratio = 1
)
x %>% summary()
# RMST method ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_rmst(
alpha = .025, power = .9,
enroll_rate = define_enroll_rate(duration = 18, rate = 1),
fail_rate = define_fail_rate(
duration = 100,
fail_rate = log(2) / 12,
hr = .7,
dropout_rate = .001
),
study_duration = 36,
tau = 18
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_rmst(
alpha = .025,
enroll_rate = define_enroll_rate(duration = 18, rate = 20),
fail_rate = define_fail_rate(
duration = 100,
fail_rate = log(2) / 12,
hr = .7,
dropout_rate = .001
),
study_duration = 36,
tau = 18
)
x %>% summary()
Default boundary generation
Description
gs_b() is the simplest version of a function to be used with the upper and lower arguments in gs_power_npe() and gs_design_npe(), or the upper_bound and lower_bound arguments in gs_prob_combo() and pmvnorm_combo(). It simply returns the vector of Z-values in the input vector par or, if k is specified, par[k]. Note that if bounds need to change with changing information at analyses, gs_b() should not be used. For instance, for spending function bounds use gs_spending_bound().
Usage
gs_b(par = NULL, k = NULL, ...)
Arguments
par |
For |
k |
Is |
... |
Further arguments passed to or from other methods. |
Value
Returns the vector input par if k is NULL; otherwise, par[k].
Specification
The contents of this section are shown in PDF user manual only.
Examples
# Simple: enter a vector of length 3 for bound
gs_b(par = 4:2)
# 2nd element of par
gs_b(par = 4:2, k = 2)
# Generate an efficacy bound using a spending function
# Use Lan-DeMets spending approximation of O'Brien-Fleming bound
# as 50%, 75% and 100% of final spending
# Information fraction
IF <- c(.5, .75, 1)
gs_b(par = gsDesign::gsDesign(
alpha = .025, k = length(IF),
test.type = 1, sfu = gsDesign::sfLDOF,
timing = IF
)$upper$bound)
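Per the Value section above, gs_b() just indexes its par argument. A minimal base-R stand-in (a sketch of the documented behavior, not the package source) makes this concrete:

```r
# Minimal stand-in mirroring gs_b()'s documented behavior:
# return all of `par` when `k` is NULL, otherwise the k-th element.
gs_b_sketch <- function(par = NULL, k = NULL, ...) {
  if (is.null(k)) par else par[k]
}

gs_b_sketch(par = 4:2)        # c(4, 3, 2)
gs_b_sketch(par = 4:2, k = 2) # 3
```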
Bound summary table
Description
Summarizes the efficacy and futility bounds for each analysis.
Usage
gs_bound_summary(
x,
digits = 4,
ddigits = 2,
tdigits = 0,
timename = "Month",
alpha = NULL
)
Arguments
x |
Design object. |
digits |
Number of digits past the decimal to be printed in the body of the table. |
ddigits |
Number of digits past the decimal to be printed for the natural parameter delta. |
tdigits |
Number of digits past the decimal point to be shown for estimated timing of each analysis. |
timename |
Text string indicating time unit. |
alpha |
Vector of alpha values to compute additional efficacy columns. |
Value
A data frame.
Examples
library(gsDesign2)
x <- gs_design_ahr(info_frac = c(.25, .75, 1), analysis_time = c(12, 25, 36))
gs_bound_summary(x)
x <- gs_design_wlr(info_frac = c(.25, .75, 1), analysis_time = c(12, 25, 36))
gs_bound_summary(x)
# Report multiple efficacy bounds (only supported for AHR designs)
x <- gs_design_ahr(analysis_time = 1:3*12, alpha = 0.0125)
gs_bound_summary(x, alpha = c(0.025, 0.05))
Conditional power computation with non-constant effect size
Description
Conditional power computation with non-constant effect size
Usage
gs_cp_npe(theta = NULL, info = NULL, a = NULL, b = NULL)
Arguments
theta |
A vector of length two, which specifies the natural parameter for treatment effect.
The first element of |
info |
A vector of two, which specifies the statistical information under the treatment effect |
a |
Interim z-value at analysis i (scalar). |
b |
Future target z-value at analysis j (scalar). |
Details
We assume Z_1 and Z_2 are the z-values at an interim analysis and a later analysis, respectively. We further assume Z_1 and Z_2 are bivariate normal with standard group sequential assumptions on independent increments, so that for i = 1, 2
E(Z_i) = \theta_i\sqrt{I_i}
Var(Z_i) = 1
Cov(Z_1, Z_2) = \sqrt{t}, \quad t \equiv I_1/I_2
where \theta_1, \theta_2 are real values and 0 < I_1 < I_2.
See https://merck.github.io/gsDesign2/articles/story-npe-background.html for assumption details.
Returned value is
P(Z_2 > b \mid Z_1 = a) = 1 - \Phi\left(\frac{b - \sqrt{t}a - \sqrt{I_2}(\theta_2 - \theta_1\sqrt{t})}{\sqrt{1 - t}}\right)
Value
A scalar with the conditional power P(Z_2>b\mid Z_1=a)
.
Examples
library(gsDesign2)
library(dplyr)
# Calculate conditional power under arbitrary theta and info
# In practice, the value of theta and info commonly comes from a design.
# More examples are available at the pkgdown vignettes.
gs_cp_npe(theta = c(.1, .2),
info = c(15, 35),
a = 1.5, b = 1.96)
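The returned value can be reproduced directly from the probability formula in the Details section using base R (a hand check of the formula, not the package implementation):

```r
# Conditional power P(Z_2 > b | Z_1 = a) computed from the formula above.
cp_npe <- function(theta, info, a, b) {
  t <- info[1] / info[2]  # information fraction I_1 / I_2
  z <- (b - sqrt(t) * a - sqrt(info[2]) * (theta[2] - theta[1] * sqrt(t))) /
    sqrt(1 - t)
  1 - pnorm(z)
}

cp_npe(theta = c(.1, .2), info = c(15, 35), a = 1.5, b = 1.96)
```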
Create npsurvSS arm object
Description
Create npsurvSS arm object
Usage
gs_create_arm(enroll_rate, fail_rate, ratio, total_time = 1e+06)
Arguments
enroll_rate |
Enrollment rates from |
fail_rate |
Failure and dropout rates from |
ratio |
Experimental:Control randomization ratio. |
total_time |
Total analysis time. |
Value
A list of the two arms.
Specification
The contents of this section are shown in PDF user manual only.
Examples
enroll_rate <- define_enroll_rate(
duration = c(2, 2, 10),
rate = c(3, 6, 9)
)
fail_rate <- define_fail_rate(
duration = c(3, 100),
fail_rate = log(2) / c(9, 18),
hr = c(.9, .6),
dropout_rate = .001
)
gs_create_arm(enroll_rate, fail_rate, ratio = 1)
Calculate sample size and bounds given targeted power and Type I error in group sequential design using average hazard ratio under non-proportional hazards
Description
Calculate sample size and bounds given targeted power and Type I error in group sequential design using average hazard ratio under non-proportional hazards
Usage
gs_design_ahr(
enroll_rate = define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9)),
fail_rate = define_fail_rate(duration = c(3, 100), fail_rate = log(2)/c(9, 18), hr =
c(0.9, 0.6), dropout_rate = 0.001),
alpha = 0.025,
beta = 0.1,
info_frac = NULL,
analysis_time = 36,
ratio = 1,
binding = FALSE,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = alpha),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = beta),
h1_spending = TRUE,
test_upper = TRUE,
test_lower = TRUE,
info_scale = c("h0_h1_info", "h0_info", "h1_info"),
r = 18,
tol = 1e-06,
interval = c(0.01, 1000)
)
Arguments
enroll_rate |
Enrollment rates defined by |
fail_rate |
Failure and dropout rates defined by |
alpha |
One-sided Type I error. |
beta |
Type II error. |
info_frac |
Targeted information fraction for analyses. See details. |
analysis_time |
Targeted calendar timing of analyses. See details. |
ratio |
Experimental:Control randomization ratio. |
binding |
Indicator of whether futility bound is binding;
default of |
upper |
Function to compute upper bound.
|
upar |
Parameters passed to
|
lower |
Function to compute lower bound, which can be set up similarly as |
lpar |
Parameters passed to |
h1_spending |
Indicator that the lower bound is to be set by spending
under the alternate hypothesis (input |
test_upper |
Indicator of which analyses should include an upper
(efficacy) bound; single value of |
test_lower |
Indicator of which analyses should include a lower bound;
single value of |
info_scale |
Information scale for calculation. Options are:
|
r |
Integer value controlling grid for numerical integration as in
Jennison and Turnbull (2000); default is 18, range is 1 to 80.
Larger values provide larger number of grid points and greater accuracy.
Normally, |
tol |
Tolerance parameter for boundary convergence (on Z-scale); normally not changed by the user. |
interval |
An interval presumed to include the times at which
expected event count is equal to targeted event.
Normally, this can be ignored by the user as it is set to |
Details
The parameters info_frac and analysis_time are used to determine the timing of interim and final analyses.
If interim analyses are determined by targeted information fractions and the study duration is known, then info_frac is a numerical vector where each element (greater than 0 and less than or equal to 1) is the information fraction for the corresponding analysis, and analysis_time, which defaults to 36, indicates the time of the final analysis.
If interim analyses are determined solely by targeted calendar timing from the start of the study, then analysis_time is a vector specifying the time of each analysis.
If both a targeted analysis time and a targeted information fraction are provided for a given analysis (both info_frac and analysis_time given as vectors), then the timing of that analysis is the later of the two.
Value
A list with input parameters, enrollment rate, analysis, and bound.
The $input is a list including alpha, beta, ratio, etc.
The $enroll_rate is a table showing the enrollment required to achieve the targeted power (1 - beta).
The $fail_rate is a table showing the failure and dropout rates, which are the same as the input.
The $bound is a table summarizing the efficacy and futility bounds at each analysis.
The $analysis is a table summarizing the analysis time, sample size, events, average HR, treatment effect, and statistical information at each analysis.
Specification
The contents of this section are shown in PDF user manual only.
Examples
library(gsDesign)
library(gsDesign2)
library(dplyr)
# Example 1 ----
# call with defaults
gs_design_ahr()
# Example 2 ----
# Single analysis
gs_design_ahr(analysis_time = 40)
# Example 3 ----
# Multiple analysis_time
gs_design_ahr(analysis_time = c(12, 24, 36))
# Example 4 ----
# Specified information fraction
gs_design_ahr(info_frac = c(.25, .75, 1), analysis_time = 36)
# Example 5 ----
# multiple analysis times & info_frac
# driven by times
gs_design_ahr(info_frac = c(.25, .75, 1), analysis_time = c(12, 25, 36))
# driven by info_frac
gs_design_ahr(info_frac = c(1 / 3, .8, 1), analysis_time = c(12, 25, 36))
# Example 6 ----
# 2-sided symmetric design with O'Brien-Fleming spending
gs_design_ahr(
analysis_time = c(12, 24, 36),
binding = TRUE,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
h1_spending = FALSE
)
# 2-sided asymmetric design with O'Brien-Fleming upper spending
# Pocock lower spending under H1 (NPH)
gs_design_ahr(
analysis_time = c(12, 24, 36),
binding = TRUE,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDPocock, total_spend = 0.1, param = NULL, timing = NULL),
h1_spending = TRUE
)
# Example 7 ----
gs_design_ahr(
alpha = 0.0125,
analysis_time = c(12, 24, 36),
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.0125, param = NULL, timing = NULL),
lower = gs_b,
lpar = rep(-Inf, 3)
)
gs_design_ahr(
alpha = 0.0125,
analysis_time = c(12, 24, 36),
upper = gs_b,
upar = gsDesign::gsDesign(
k = 3, test.type = 1, n.I = c(.25, .75, 1),
sfu = sfLDOF, sfupar = NULL, alpha = 0.0125
)$upper$bound,
lower = gs_b,
lpar = rep(-Inf, 3)
)
Group sequential design using MaxCombo test under non-proportional hazards
Description
Group sequential design using MaxCombo test under non-proportional hazards
Usage
gs_design_combo(
enroll_rate = define_enroll_rate(duration = 12, rate = 500/12),
fail_rate = define_fail_rate(duration = c(4, 100), fail_rate = log(2)/15, hr = c(1,
0.6), dropout_rate = 0.001),
fh_test = rbind(data.frame(rho = 0, gamma = 0, tau = -1, test = 1, analysis = 1:3,
analysis_time = c(12, 24, 36)), data.frame(rho = c(0, 0.5), gamma = 0.5, tau = -1,
test = 2:3, analysis = 3, analysis_time = 36)),
ratio = 1,
alpha = 0.025,
beta = 0.2,
binding = FALSE,
upper = gs_b,
upar = c(3, 2, 1),
lower = gs_b,
lpar = c(-1, 0, 1),
algorithm = mvtnorm::GenzBretz(maxpts = 1e+05, abseps = 1e-05),
n_upper_bound = 1000,
...
)
Arguments
enroll_rate |
Enrollment rates defined by |
fail_rate |
Failure and dropout rates defined by |
fh_test |
A data frame to summarize the test in each analysis. See examples for its data structure. |
ratio |
Experimental:Control randomization ratio. |
alpha |
One-sided Type I error. |
beta |
Type II error. |
binding |
Indicator of whether futility bound is binding;
default of |
upper |
Function to compute upper bound.
|
upar |
Parameters passed to
|
lower |
Function to compute lower bound, which can be set up similarly as |
lpar |
Parameters passed to |
algorithm |
an object of class |
n_upper_bound |
A numeric value of upper limit of sample size. |
... |
Additional parameters passed to mvtnorm::pmvnorm. |
Value
A list with input parameters, enrollment rate, analysis, and bound.
Examples
# The example is slow to run
library(dplyr)
library(mvtnorm)
library(gsDesign)
enroll_rate <- define_enroll_rate(
duration = 12,
rate = 500 / 12
)
fail_rate <- define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 15, # median survival 15 month
hr = c(1, .6),
dropout_rate = 0.001
)
fh_test <- rbind(
data.frame(
rho = 0, gamma = 0, tau = -1,
test = 1, analysis = 1:3, analysis_time = c(12, 24, 36)
),
data.frame(
rho = c(0, 0.5), gamma = 0.5, tau = -1,
test = 2:3, analysis = 3, analysis_time = 36
)
)
x <- gsSurv(
k = 3,
test.type = 4,
alpha = 0.025,
beta = 0.2,
astar = 0,
timing = 1,
sfu = sfLDOF,
sfupar = 0,
sfl = sfLDOF,
sflpar = 0,
lambdaC = 0.1,
hr = 0.6,
hr0 = 1,
eta = 0.01,
gamma = 10,
R = 12,
S = NULL,
T = 36,
minfup = 24,
ratio = 1
)
# Example 1 ----
# User-defined boundary
gs_design_combo(
enroll_rate,
fail_rate,
fh_test,
alpha = 0.025, beta = 0.2,
ratio = 1,
binding = FALSE,
upar = x$upper$bound,
lpar = x$lower$bound
)
# Example 2 ----
# Boundary derived by spending function
gs_design_combo(
enroll_rate,
fail_rate,
fh_test,
alpha = 0.025,
beta = 0.2,
ratio = 1,
binding = FALSE,
upper = gs_spending_combo,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), # alpha spending
lower = gs_spending_combo,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2) # beta spending
)
Group sequential design computation with non-constant effect and information
Description
Derives group sequential design size, bounds and boundary crossing probabilities based on proportionate information and effect size at analyses. It allows a non-constant treatment effect over time, but can also be applied for the usual homogeneous effect size designs. It requires treatment effect and proportionate statistical information at each analysis as well as a method of deriving bounds, such as spending. The routine enables two things not available in the gsDesign package: 1) non-constant effect, 2) more flexibility in boundary selection. For many applications, the non-proportional-hazards design function
gs_design_nph()
will be used; it calls this function. Initial bound types supported are 1) spending bounds, 2) fixed bounds, and 3) Haybittle-Peto-like bounds. The requirement is to have a boundary update method that can compute each bound without knowledge of future bounds. As an example, bounds based on conditional power that require knowledge of all future bounds are not supported by this routine; a more limited conditional power method will be demonstrated. Boundary family designs such as Wang-Tsiatis designs, including the original (non-spending-function-based) O'Brien-Fleming and Pocock designs, are not supported by
gs_power_npe()
.
Usage
gs_design_npe(
theta = 0.1,
theta0 = 0,
theta1 = theta,
info = 1,
info0 = NULL,
info1 = NULL,
info_scale = c("h0_h1_info", "h0_info", "h1_info"),
alpha = 0.025,
beta = 0.1,
upper = gs_b,
upar = qnorm(0.975),
lower = gs_b,
lpar = -Inf,
test_upper = TRUE,
test_lower = TRUE,
binding = FALSE,
r = 18,
tol = 1e-06
)
Arguments
theta |
Natural parameter for group sequential design representing expected incremental drift at all analyses; used for power calculation. |
theta0 |
Natural parameter used for upper bound spending;
if |
theta1 |
Natural parameter used for lower bound spending;
if |
info |
Proportionate statistical information at
all analyses for input |
info0 |
Proportionate statistical information under null hypothesis, if different than alternative; impacts null hypothesis bound calculation. |
info1 |
Proportionate statistical information under alternate hypothesis; impacts null hypothesis bound calculation. |
info_scale |
Information scale for calculation. Options are:
|
alpha |
One-sided Type I error. |
beta |
Type II error. |
upper |
Function to compute upper bound. |
upar |
Parameters passed to the function provided in |
lower |
Function to compute lower bound. |
lpar |
Parameters passed to the function provided in |
test_upper |
Indicator of which analyses should include
an upper (efficacy) bound; single value of |
test_lower |
Indicator of which analyses should include a lower bound;
single value of |
binding |
Indicator of whether futility bound is binding;
default of |
r |
Integer value controlling grid for numerical integration
as in Jennison and Turnbull (2000); default is 18, range is 1 to 80.
Larger values provide larger number of grid points and greater accuracy.
Normally |
tol |
Tolerance parameter for boundary convergence (on Z-scale). |
Details
The inputs info
and info0
should be
vectors of the same length with increasing positive numbers.
The design returned will change these by some constant scale
factor to ensure the design has power 1 - beta
.
The bound specifications in upper
, lower
, upar
, lpar
will be used to ensure Type I error and other boundary properties are as specified.
Value
A tibble with columns analysis, bound, z, probability, theta, info, info0.
Specification
The contents of this section are shown in PDF user manual only.
Author(s)
Keaven Anderson keaven_anderson@merck.com
Examples
library(dplyr)
library(gsDesign)
# Example 1 ----
# Single analysis
# Lachin book p 71 difference of proportions example
pc <- .28 # Control response rate
pe <- .40 # Experimental response rate
p0 <- (pc + pe) / 2 # Ave response rate under H0
# Information per increment of 1 in sample size
info0 <- 1 / (p0 * (1 - p0) * 4)
info <- 1 / (pc * (1 - pc) * 2 + pe * (1 - pe) * 2)
# Result should round up to next even number = 652
# Divide information needed under H1 by information per patient added
gs_design_npe(theta = pe - pc, info = info, info0 = info0)
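The 652 mentioned in the comment can be reproduced by hand with the classic single-analysis sample-size formula (a self-contained base-R check under the same assumptions, not package code):

```r
# Hand check of Example 1: required sample size from per-patient information.
pc <- .28; pe <- .40; p0 <- (pc + pe) / 2
info0 <- 1 / (p0 * (1 - p0) * 4)                     # per patient, under H0
info  <- 1 / (pc * (1 - pc) * 2 + pe * (1 - pe) * 2) # per patient, under H1
# Combine H0 and H1 standard errors, divide by effect size, square
n <- ((qnorm(.975) / sqrt(info0) + qnorm(.9) / sqrt(info)) / (pe - pc))^2
2 * ceiling(n / 2)  # round up to next even number: 652
```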
# Example 2 ----
# Fixed bound
x <- gs_design_npe(
alpha = 0.0125,
theta = c(.1, .2, .3),
info = (1:3) * 80,
info0 = (1:3) * 80,
upper = gs_b,
upar = gsDesign::gsDesign(k = 3, sfu = gsDesign::sfLDOF, alpha = 0.0125)$upper$bound,
lower = gs_b,
lpar = c(-1, 0, 0)
)
x
# Same upper bound; this represents non-binding Type I error and will total 0.025
gs_power_npe(
theta = rep(0, 3),
info = (x %>% filter(bound == "upper"))$info,
upper = gs_b,
upar = (x %>% filter(bound == "upper"))$z,
lower = gs_b,
lpar = rep(-Inf, 3)
)
# Example 3 ----
# Spending bound examples
# Design with futility only at analysis 1; efficacy only at analyses 2, 3
# Spending bound for efficacy; fixed bound for futility
# NOTE: test_upper and test_lower DO NOT WORK with gs_b; must explicitly make bounds infinite
# test_upper and test_lower DO WORK with gs_spending_bound
gs_design_npe(
theta = c(.1, .2, .3),
info = (1:3) * 40,
info0 = (1:3) * 40,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lower = gs_b,
lpar = c(-1, -Inf, -Inf),
test_upper = c(FALSE, TRUE, TRUE)
)
# one can try `info_scale = "h1_info"` or `info_scale = "h0_info"` here
gs_design_npe(
theta = c(.1, .2, .3),
info = (1:3) * 40,
info0 = (1:3) * 30,
info_scale = "h1_info",
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lower = gs_b,
lpar = c(-1, -Inf, -Inf),
test_upper = c(FALSE, TRUE, TRUE)
)
# Example 4 ----
# Spending function bounds
# 2-sided asymmetric bounds
# Lower spending based on non-zero effect
gs_design_npe(
theta = c(.1, .2, .3),
info = (1:3) * 40,
info0 = (1:3) * 30,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfHSD, total_spend = 0.1, param = -1, timing = NULL)
)
# Example 5 ----
# Two-sided symmetric spend, O'Brien-Fleming spending
# Typically, 2-sided bounds are binding
xx <- gs_design_npe(
theta = c(.1, .2, .3),
info = (1:3) * 40,
binding = TRUE,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL)
)
xx
# Re-use these bounds under alternate hypothesis
# Always use binding = TRUE for power calculations
gs_power_npe(
theta = c(.1, .2, .3),
info = (1:3) * 40,
binding = TRUE,
upper = gs_b,
lower = gs_b,
upar = (xx %>% filter(bound == "upper"))$z,
lpar = -(xx %>% filter(bound == "upper"))$z
)
Group sequential design for a binary outcome measured by risk difference
Description
Group sequential design for a binary outcome measured by risk difference
Usage
gs_design_rd(
p_c = tibble::tibble(stratum = "All", rate = 0.2),
p_e = tibble::tibble(stratum = "All", rate = 0.15),
info_frac = 1:3/3,
rd0 = 0,
alpha = 0.025,
beta = 0.1,
ratio = 1,
stratum_prev = NULL,
weight = c("unstratified", "ss", "invar"),
upper = gs_b,
lower = gs_b,
upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound,
lpar = c(qnorm(0.1), rep(-Inf, 2)),
test_upper = TRUE,
test_lower = TRUE,
info_scale = c("h0_h1_info", "h0_info", "h1_info"),
binding = FALSE,
r = 18,
tol = 1e-06,
h1_spending = TRUE
)
Arguments
p_c |
Rate at the control group. |
p_e |
Rate at the experimental group. |
info_frac |
Statistical information fraction. |
rd0 |
Treatment effect under super-superiority designs; the default is 0. |
alpha |
One-sided Type I error. |
beta |
Type II error. |
ratio |
Experimental:Control randomization ratio (not yet implemented). |
stratum_prev |
Randomization ratio of the different strata.
If the design is unstratified, then |
weight |
The weighting scheme for stratified population. |
upper |
Function to compute upper bound. |
lower |
Function to compute lower bound. |
upar |
Parameters passed to |
lpar |
Parameters passed to |
test_upper |
Indicator of which analyses should include an upper
(efficacy) bound; single value of |
test_lower |
Indicator of which analyses should include a lower bound;
single value of |
info_scale |
Information scale for calculation. Options are:
|
binding |
Indicator of whether futility bound is binding;
default of |
r |
Integer value controlling grid for numerical integration
as in Jennison and Turnbull (2000); default is 18, range is 1 to 80.
Larger values provide larger number of grid points and greater accuracy.
Normally, |
tol |
Tolerance parameter for boundary convergence (on Z-scale). |
h1_spending |
Indicator that the lower bound is to be set by
spending under the alternate hypothesis (input |
Details
To be added.
Value
A list with input parameters, analysis, and bound.
Examples
library(gsDesign)
# Example 1 ----
# unstratified group sequential design
x <- gs_design_rd(
p_c = tibble::tibble(stratum = "All", rate = .2),
p_e = tibble::tibble(stratum = "All", rate = .15),
info_frac = c(0.7, 1),
rd0 = 0,
alpha = .025,
beta = .1,
ratio = 1,
stratum_prev = NULL,
weight = "unstratified",
upper = gs_b,
lower = gs_b,
upar = gsDesign(k = 2, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound,
lpar = c(qnorm(.1), rep(-Inf, 2))
)
y <- gs_power_rd(
p_c = tibble::tibble(stratum = "All", rate = .2),
p_e = tibble::tibble(stratum = "All", rate = .15),
n = tibble::tibble(stratum = "All", n = x$analysis$n, analysis = 1:2),
rd0 = 0,
ratio = 1,
weight = "unstratified",
upper = gs_b,
lower = gs_b,
upar = gsDesign(k = 2, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound,
lpar = c(qnorm(.1), rep(-Inf, 2))
)
# The above two designs share the same power given the same sample size and treatment effect
x$bound$probability[x$bound$bound == "upper" & x$bound$analysis == 2]
y$bound$probability[y$bound$bound == "upper" & y$bound$analysis == 2]
# Example 2 ----
# stratified group sequential design
gs_design_rd(
p_c = tibble::tibble(
stratum = c("biomarker positive", "biomarker negative"),
rate = c(.2, .25)
),
p_e = tibble::tibble(
stratum = c("biomarker positive", "biomarker negative"),
rate = c(.15, .22)
),
info_frac = c(0.7, 1),
rd0 = 0,
alpha = .025,
beta = .1,
ratio = 1,
stratum_prev = tibble::tibble(
stratum = c("biomarker positive", "biomarker negative"),
prevalence = c(.4, .6)
),
weight = "ss",
upper = gs_spending_bound, lower = gs_b,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lpar = rep(-Inf, 2)
)
Group sequential design using weighted log-rank test under non-proportional hazards
Description
Group sequential design using weighted log-rank test under non-proportional hazards
Usage
gs_design_wlr(
enroll_rate = define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9)),
fail_rate = tibble(stratum = "All", duration = c(3, 100), fail_rate = log(2)/c(9, 18),
hr = c(0.9, 0.6), dropout_rate = rep(0.001, 2)),
weight = "logrank",
approx = "asymptotic",
alpha = 0.025,
beta = 0.1,
ratio = 1,
info_frac = NULL,
info_scale = c("h0_h1_info", "h0_info", "h1_info"),
analysis_time = 36,
binding = FALSE,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = alpha),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = beta),
test_upper = TRUE,
test_lower = TRUE,
h1_spending = TRUE,
r = 18,
tol = 1e-06,
interval = c(0.01, 1000)
)
Arguments
enroll_rate |
Enrollment rates defined by |
fail_rate |
Failure and dropout rates defined by |
weight |
Weight of weighted log rank test:
|
approx |
Approximate estimation method for Z statistics.
|
alpha |
One-sided Type I error. |
beta |
Type II error. |
ratio |
Experimental:Control randomization ratio. |
info_frac |
Targeted information fraction for analyses. See details. |
info_scale |
Information scale for calculation. Options are:
|
analysis_time |
Targeted calendar timing of analyses. See details. |
binding |
Indicator of whether futility bound is binding;
default of |
upper |
Function to compute upper bound.
|
upar |
Parameters passed to
|
lower |
Function to compute lower bound, which can be set up similarly as |
lpar |
Parameters passed to |
test_upper |
Indicator of which analyses should include an upper
(efficacy) bound; single value of |
test_lower |
Indicator of which analyses should include a lower bound;
single value of |
h1_spending |
Indicator that the lower bound is to be set by spending
under the alternate hypothesis (input |
r |
Integer value controlling grid for numerical integration as in
Jennison and Turnbull (2000); default is 18, range is 1 to 80.
Larger values provide larger number of grid points and greater accuracy.
Normally, |
tol |
Tolerance parameter for boundary convergence (on Z-scale); normally not changed by the user. |
interval |
An interval presumed to include the times at which
expected event count is equal to targeted event.
Normally, this can be ignored by the user as it is set to |
Value
A list with input parameters, enrollment rate, analysis, and bound.
Specification
The contents of this section are shown in PDF user manual only.
Examples
library(dplyr)
library(mvtnorm)
library(gsDesign)
library(gsDesign2)
# set enrollment rates
enroll_rate <- define_enroll_rate(duration = 12, rate = 1)
# set failure rates
fail_rate <- define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 15, # median survival 15 month
hr = c(1, .6),
dropout_rate = 0.001
)
# Example 1 ----
# Information fraction driven design
gs_design_wlr(
enroll_rate = enroll_rate,
fail_rate = fail_rate,
ratio = 1,
alpha = 0.025, beta = 0.2,
weight = list(method = "mb", param = list(tau = Inf, w_max = 2)),
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2),
analysis_time = 36,
info_frac = c(0.6, 1)
)
# Example 2 ----
# Calendar time driven design
gs_design_wlr(
enroll_rate = enroll_rate,
fail_rate = fail_rate,
ratio = 1,
alpha = 0.025, beta = 0.2,
weight = list(method = "mb", param = list(tau = Inf, w_max = 2)),
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2),
analysis_time = c(24, 36),
info_frac = NULL
)
# Example 3 ----
# Both calendar time and information fraction driven design
gs_design_wlr(
enroll_rate = enroll_rate,
fail_rate = fail_rate,
ratio = 1,
alpha = 0.025, beta = 0.2,
weight = list(method = "mb", param = list(tau = Inf, w_max = 2)),
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2),
analysis_time = c(24, 36),
info_frac = c(0.6, 1)
)
Information and effect size based on AHR approximation
Description
Based on piecewise enrollment, failure, and dropout rates, computes approximate information and effect size using an average hazard ratio model.
Usage
gs_info_ahr(
enroll_rate = define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9)),
fail_rate = define_fail_rate(duration = c(3, 100), fail_rate = log(2)/c(9, 18), hr =
c(0.9, 0.6), dropout_rate = 0.001),
ratio = 1,
event = NULL,
analysis_time = NULL,
interval = c(0.01, 1000)
)
Arguments
enroll_rate |
Enrollment rates from |
fail_rate |
Failure and dropout rates from |
ratio |
Experimental:Control randomization ratio. |
event |
Targeted minimum events at each analysis. |
analysis_time |
Targeted minimum study duration at each analysis. |
interval |
An interval that is presumed to include the time at which expected event count is equal to targeted event. |
Details
The ahr()
function computes statistical information at targeted
event times. The expected_time()
function is used to get events and
average HR at targeted analysis_time
.
Value
A data frame with columns analysis, time, ahr, event, theta, info, and info0. The columns info and info0 contain statistical information under H1 and H0, respectively. For analysis k, time[k] is the maximum of analysis_time[k] and the expected time required to accrue the targeted event[k]. ahr is the expected average hazard ratio at each analysis.
Specification
The contents of this section are shown in PDF user manual only.
Examples
library(gsDesign)
library(gsDesign2)
# Example 1 ----
# Only put in targeted events
gs_info_ahr(event = c(30, 40, 50))
# Example 2 ----
# Only put in targeted analysis times
gs_info_ahr(analysis_time = c(18, 27, 36))
# Example 3 ----
# Some analysis times after the time at which targeted events accrue
# Check that both time >= input analysis_time and event >= input event
gs_info_ahr(event = c(30, 40, 50), analysis_time = c(16, 19, 26))
gs_info_ahr(event = c(30, 40, 50), analysis_time = c(14, 20, 24))
Information and effect size for MaxCombo test
Description
Information and effect size for MaxCombo test
Usage
gs_info_combo(
enroll_rate = define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9)),
fail_rate = define_fail_rate(duration = c(3, 100), fail_rate = log(2)/c(9, 18), hr =
c(0.9, 0.6), dropout_rate = 0.001),
ratio = 1,
event = NULL,
analysis_time = NULL,
rho,
gamma,
tau = rep(-1, length(rho)),
approx = "asymptotic"
)
Arguments
enroll_rate |
Enrollment rates defined by define_enroll_rate(). |
fail_rate |
Failure and dropout rates defined by define_fail_rate(). |
ratio |
Experimental:Control randomization ratio (not yet implemented). |
event |
Targeted events at each analysis. |
analysis_time |
Minimum time of analysis. |
rho |
Vector of rho weighting parameters for the Fleming-Harrington tests. |
gamma |
Vector of gamma weighting parameters for the Fleming-Harrington tests. |
tau |
Vector of tau weighting parameters for the Fleming-Harrington tests. |
approx |
Approximation method; default is "asymptotic". |
Value
A tibble with columns for test index, analysis index, analysis time, sample size, number of events, ahr, delta, sigma2, theta, and statistical information.
Examples
gs_info_combo(rho = c(0, 0.5), gamma = c(0.5, 0), analysis_time = c(12, 24))
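The rho and gamma parameters correspond to Fleming-Harrington weights w(t) = S(t)^rho * (1 - S(t))^gamma, where S(t) is the pooled survival fraction. A minimal base-R sketch of this weight (the helper name fh_weight is illustrative, not part of the package API):

```r
# Fleming-Harrington (rho, gamma) weight at pooled survival fraction s
fh_weight <- function(s, rho, gamma) s^rho * (1 - s)^gamma

# rho = 0, gamma = 0 recovers the unweighted logrank test (weight 1 everywhere)
fh_weight(c(0.9, 0.5, 0.2), rho = 0, gamma = 0)

# rho = 0, gamma = 0.5 up-weights late differences (small survival fractions)
fh_weight(c(0.9, 0.5, 0.2), rho = 0, gamma = 0.5)
```

Each row of rho and gamma passed to gs_info_combo() defines one such weighted test entering the MaxCombo statistic.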
Information and effect size under risk difference
Description
Information and effect size under risk difference
Usage
gs_info_rd(
p_c = tibble::tibble(stratum = "All", rate = 0.2),
p_e = tibble::tibble(stratum = "All", rate = 0.15),
n = tibble::tibble(stratum = "All", n = c(100, 200, 300), analysis = 1:3),
rd0 = 0,
ratio = 1,
weight = c("unstratified", "ss", "invar")
)
Arguments
p_c |
Rate at the control group. |
p_e |
Rate at the experimental group. |
n |
Sample size. |
rd0 |
The risk difference under H0. |
ratio |
Experimental:Control randomization ratio. |
weight |
Weighting method; can be "unstratified", "ss" (sample size), or "invar" (inverse variance). |
Value
A tibble with columns for analysis index, sample size, risk difference, risk difference under the null hypothesis, theta1 (standardized treatment effect under the alternative hypothesis), theta0 (standardized treatment effect under the null hypothesis), and statistical information.
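For intuition, in the unstratified case the statistical information for a risk difference is the inverse of the variance of its estimate under standard binomial asymptotics. A minimal base-R sketch (rd_info here is an illustrative helper, not the package's internal implementation):

```r
# Inverse-variance information for an unstratified risk difference
# n: total sample size; ratio: experimental:control randomization ratio
rd_info <- function(p_c, p_e, n, ratio = 1) {
  n_e <- n * ratio / (1 + ratio)  # experimental arm size
  n_c <- n / (1 + ratio)          # control arm size
  v <- p_c * (1 - p_c) / n_c + p_e * (1 - p_e) / n_e
  1 / v
}

# Information grows linearly with total sample size
rd_info(p_c = 0.15, p_e = 0.1, n = c(100, 200, 300))
```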
Examples
# Example 1 ----
# unstratified case with H0: rd0 = 0
gs_info_rd(
p_c = tibble::tibble(stratum = "All", rate = .15),
p_e = tibble::tibble(stratum = "All", rate = .1),
n = tibble::tibble(stratum = "All", n = c(100, 200, 300), analysis = 1:3),
rd0 = 0,
ratio = 1
)
# Example 2 ----
# unstratified case with H0: rd0 != 0
gs_info_rd(
p_c = tibble::tibble(stratum = "All", rate = .2),
p_e = tibble::tibble(stratum = "All", rate = .15),
n = tibble::tibble(stratum = "All", n = c(100, 200, 300), analysis = 1:3),
rd0 = 0.005,
ratio = 1
)
# Example 3 ----
# stratified case under sample size weighting and H0: rd0 = 0
gs_info_rd(
p_c = tibble::tibble(stratum = c("S1", "S2", "S3"), rate = c(.15, .2, .25)),
p_e = tibble::tibble(stratum = c("S1", "S2", "S3"), rate = c(.1, .16, .19)),
n = tibble::tibble(
stratum = rep(c("S1", "S2", "S3"), each = 3),
analysis = rep(1:3, 3),
n = c(50, 100, 200, 40, 80, 160, 60, 120, 240)
),
rd0 = 0,
ratio = 1,
weight = "ss"
)
# Example 4 ----
# stratified case under inverse variance weighting and H0: rd0 = 0
gs_info_rd(
p_c = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.15, .2, .25)
),
p_e = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.1, .16, .19)
),
n = tibble::tibble(
stratum = rep(c("S1", "S2", "S3"), each = 3),
analysis = rep(1:3, 3),
n = c(50, 100, 200, 40, 80, 160, 60, 120, 240)
),
rd0 = 0,
ratio = 1,
weight = "invar"
)
# Example 5 ----
# stratified case under sample size weighting and H0: rd0 != 0
gs_info_rd(
p_c = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.15, .2, .25)
),
p_e = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.1, .16, .19)
),
n = tibble::tibble(
stratum = rep(c("S1", "S2", "S3"), each = 3),
analysis = rep(1:3, 3),
n = c(50, 100, 200, 40, 80, 160, 60, 120, 240)
),
rd0 = 0.02,
ratio = 1,
weight = "ss"
)
# Example 6 ----
# stratified case under inverse variance weighting and H0: rd0 != 0
gs_info_rd(
p_c = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.15, .2, .25)
),
p_e = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.1, .16, .19)
),
n = tibble::tibble(
stratum = rep(c("S1", "S2", "S3"), each = 3),
analysis = rep(1:3, 3),
n = c(50, 100, 200, 40, 80, 160, 60, 120, 240)
),
rd0 = 0.02,
ratio = 1,
weight = "invar"
)
# Example 7 ----
# stratified case under inverse variance weighting and H0: rd0 != 0 and
# rd0 differing across strata
gs_info_rd(
p_c = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.15, .2, .25)
),
p_e = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.1, .16, .19)
),
n = tibble::tibble(
stratum = rep(c("S1", "S2", "S3"), each = 3),
analysis = rep(1:3, 3),
n = c(50, 100, 200, 40, 80, 160, 60, 120, 240)
),
rd0 = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rd0 = c(0.01, 0.02, 0.03)
),
ratio = 1,
weight = "invar"
)
Information and effect size for weighted log-rank test
Description
Based on piecewise enrollment rates, failure rates, and dropout rates, computes approximate information and effect size using an average hazard ratio model.
Usage
gs_info_wlr(
enroll_rate = define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9)),
fail_rate = define_fail_rate(duration = c(3, 100), fail_rate = log(2)/c(9, 18), hr =
c(0.9, 0.6), dropout_rate = 0.001),
ratio = 1,
event = NULL,
analysis_time = NULL,
weight = "logrank",
approx = "asymptotic",
interval = c(0.01, 1000)
)
Arguments
enroll_rate |
Enrollment rates defined by define_enroll_rate(). |
fail_rate |
Failure and dropout rates. |
ratio |
Experimental:Control randomization ratio. |
event |
Targeted minimum events at each analysis. |
analysis_time |
Targeted minimum study duration at each analysis. |
weight |
Weight of weighted log-rank test; the default "logrank" gives the unweighted log-rank test. |
approx |
Approximate estimation method for Z statistics; default is "asymptotic". |
interval |
An interval that is presumed to include the time at which expected event count is equal to targeted event. |
Details
The ahr() function computes statistical information at targeted event times. The expected_time() function is used to get events and average HR at targeted analysis_time.
Value
A tibble with columns Analysis, Time, N, Events, AHR, delta, sigma2, theta, info, info0.
The columns info and info0 contain statistical information under H1 and H0, respectively.
For analysis k, Time[k] is the maximum of analysis_time[k] and the expected time required to accrue the targeted event[k].
AHR is the expected average hazard ratio at each analysis.
Examples
library(gsDesign2)
# Set enrollment rates
enroll_rate <- define_enroll_rate(duration = 12, rate = 500 / 12)
# Set failure rates
fail_rate <- define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 15, # median survival 15 month
hr = c(1, .6),
dropout_rate = 0.001
)
# Set the targeted number of events and analysis time
event <- c(30, 40, 50)
analysis_time <- c(10, 24, 30)
gs_info_wlr(
enroll_rate = enroll_rate, fail_rate = fail_rate,
event = event, analysis_time = analysis_time
)
Group sequential design power using average hazard ratio under non-proportional hazards
Description
Calculate power, given the sample size, for a group sequential design using average hazard ratio under non-proportional hazards.
Usage
gs_power_ahr(
enroll_rate = define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9)),
fail_rate = define_fail_rate(duration = c(3, 100), fail_rate = log(2)/c(9, 18), hr =
c(0.9, 0.6), dropout_rate = rep(0.001, 2)),
event = c(30, 40, 50),
analysis_time = NULL,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = NULL),
test_lower = TRUE,
test_upper = TRUE,
ratio = 1,
binding = FALSE,
h1_spending = TRUE,
info_scale = c("h0_h1_info", "h0_info", "h1_info"),
r = 18,
tol = 1e-06,
interval = c(0.01, 1000),
integer = FALSE
)
Arguments
enroll_rate |
Enrollment rates defined by define_enroll_rate(). |
fail_rate |
Failure and dropout rates defined by define_fail_rate(). |
event |
A numerical vector specifying the targeted events at each analysis. See details. |
analysis_time |
Targeted calendar timing of analyses. See details. |
upper |
Function to compute upper bound, such as gs_spending_bound() or gs_b(). |
upar |
Parameters passed to upper. |
lower |
Function to compute lower bound, which can be set up similarly as upper. |
lpar |
Parameters passed to lower. |
test_lower |
Indicator of which analyses should include a lower bound; a single value of TRUE (default) indicates all analyses, FALSE indicates none, and a logical vector of the same length as the number of analyses selects individual analyses. |
test_upper |
Indicator of which analyses should include an upper (efficacy) bound; a single value of TRUE (default) indicates all analyses, and a logical vector of the same length as the number of analyses selects individual analyses. |
ratio |
Experimental:Control randomization ratio. |
binding |
Indicator of whether futility bound is binding; default of FALSE indicates non-binding. |
h1_spending |
Indicator that the lower bound is to be set by spending under the alternate hypothesis (input fail_rate) if spending is used for the lower bound. |
info_scale |
Information scale for calculation. Options are "h0_h1_info" (default, using information under both null and alternative hypotheses), "h0_info" (using information under the null hypothesis), or "h1_info" (using information under the alternative hypothesis). |
r |
Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide larger number of grid points and greater accuracy. Normally, r will not be changed by the user. |
tol |
Tolerance parameter for boundary convergence (on Z-scale); normally not changed by the user. |
interval |
An interval presumed to include the times at which expected event count is equal to targeted event. Normally, this can be ignored by the user as it is set to c(0.01, 1000). |
integer |
Indicator of whether integer sample size and events are intended; used when applying to_integer() to the design. |
Details
Note that time units are arbitrary, but should be the same for all rate parameters in enroll_rate, fail_rate, and analysis_time.
Computed bounds satisfy the input upper bound specification in upper and upar, and the lower bound specification in lower and lpar.
The ahr() function computes statistical information at targeted event times. The expected_time() function is used to get events and average HR at targeted analysis_time.
The parameters event and analysis_time are used to determine the timing for interim and final analyses:
- If analysis timing is to be determined by targeted events, then event is a numerical vector specifying the targeted events for each analysis; note that this can be NULL.
- If interim analysis is determined by targeted calendar timing relative to start of enrollment, then analysis_time will be a vector specifying the calendar time from start of study for each analysis; note that this can be NULL.
- A corresponding element of event or analysis_time should be provided for each analysis.
- If both event[i] and analysis_time[i] are provided for analysis i, then the time corresponding to the later of these is used for analysis i.
Value
A list with input parameters, enrollment rate, analysis, and bound.
- $input: a list including alpha, beta, ratio, etc.
- $enroll_rate: a table showing the enrollment, which is the same as input.
- $fail_rate: a table showing the failure and dropout rates, which is the same as input.
- $bound: a table summarizing the efficacy and futility bounds at each analysis.
- $analysis: a table summarizing the analysis time, sample size, events, average HR, treatment effect, and statistical information at each analysis.
Specification
The contents of this section are shown in PDF user manual only.
Examples
library(gsDesign2)
library(dplyr)
# Example 1 ----
# The default output of `gs_power_ahr()` is driven by events,
# i.e., `event = c(30, 40, 50)`, `analysis_time = NULL`
gs_power_ahr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1))
# Example 2 ----
# 2-sided symmetric O'Brien-Fleming spending bound, driven by analysis time,
# i.e., `event = NULL`, `analysis_time = c(12, 24, 36)`
gs_power_ahr(
analysis_time = c(12, 24, 36),
event = NULL,
binding = TRUE,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025)
)
# Example 3 ----
# 2-sided symmetric O'Brien-Fleming spending bound, driven by event,
# i.e., `event = c(20, 50, 70)`, `analysis_time = NULL`
# Note that this assumes targeted final events for the design is 70 events.
# If actual targeted final events were 65, then `timing = c(20, 50, 70) / 65`
# would be added to `upar` and `lpar` lists.
# NOTE: at present the computed information fraction in output `analysis` is based
# on 70 events rather than 65 events when the `timing` argument is used in this way.
# A vignette on this topic will be forthcoming.
gs_power_ahr(
analysis_time = NULL,
event = c(20, 50, 70),
binding = TRUE,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025)
)
# Example 4 ----
# 2-sided symmetric O'Brien-Fleming spending bound,
# driven by both `event` and `analysis_time`, i.e.,
# both `event` and `analysis_time` are not `NULL`,
# then the analysis will driven by the maximal one, i.e.,
# Time = max(analysis_time, calculated Time for targeted event)
# Events = max(events, calculated events for targeted analysis_time)
gs_power_ahr(
analysis_time = c(12, 24, 36),
event = c(30, 40, 50), h1_spending = FALSE,
binding = TRUE,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025)
)
Group sequential design power using MaxCombo test under non-proportional hazards
Description
Group sequential design power using MaxCombo test under non-proportional hazards
Usage
gs_power_combo(
enroll_rate = define_enroll_rate(duration = 12, rate = 500/12),
fail_rate = define_fail_rate(duration = c(4, 100), fail_rate = log(2)/15, hr = c(1,
0.6), dropout_rate = 0.001),
fh_test = rbind(data.frame(rho = 0, gamma = 0, tau = -1, test = 1, analysis = 1:3,
analysis_time = c(12, 24, 36)), data.frame(rho = c(0, 0.5), gamma = 0.5, tau = -1,
test = 2:3, analysis = 3, analysis_time = 36)),
ratio = 1,
binding = FALSE,
upper = gs_b,
upar = c(3, 2, 1),
lower = gs_b,
lpar = c(-1, 0, 1),
algorithm = mvtnorm::GenzBretz(maxpts = 1e+05, abseps = 1e-05),
...
)
Arguments
enroll_rate |
Enrollment rates defined by define_enroll_rate(). |
fail_rate |
Failure and dropout rates defined by define_fail_rate(). |
fh_test |
A data frame to summarize the test in each analysis. See examples for its data structure. |
ratio |
Experimental:Control randomization ratio. |
binding |
Indicator of whether futility bound is binding; default of FALSE indicates non-binding. |
upper |
Function to compute upper bound, such as gs_spending_combo() or gs_b(). |
upar |
Parameters passed to upper. |
lower |
Function to compute lower bound, which can be set up similarly as upper. |
lpar |
Parameters passed to lower. |
algorithm |
An object of class GenzBretz, Miwa, or TVPACK specifying the algorithm and hyperparameters used by mvtnorm::pmvnorm. |
... |
Additional parameters passed to mvtnorm::pmvnorm. |
Value
A list with input parameters, enrollment rate, analysis, and bound.
Specification
The contents of this section are shown in PDF user manual only.
Examples
library(dplyr)
library(mvtnorm)
library(gsDesign)
library(gsDesign2)
enroll_rate <- define_enroll_rate(
duration = 12,
rate = 500 / 12
)
fail_rate <- define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 15, # median survival 15 month
hr = c(1, .6),
dropout_rate = 0.001
)
fh_test <- rbind(
data.frame(rho = 0, gamma = 0, tau = -1, test = 1, analysis = 1:3, analysis_time = c(12, 24, 36)),
data.frame(rho = c(0, 0.5), gamma = 0.5, tau = -1, test = 2:3, analysis = 3, analysis_time = 36)
)
# Example 1 ----
# Minimal Information Fraction derived bound
gs_power_combo(
enroll_rate = enroll_rate,
fail_rate = fail_rate,
fh_test = fh_test,
upper = gs_spending_combo,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lower = gs_spending_combo,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2)
)
Group sequential bound computation with non-constant effect
Description
Derives group sequential bounds and boundary crossing probabilities for a design. It allows a non-constant treatment effect over time, but also can be applied for the usual homogeneous effect size designs. It requires treatment effect and statistical information at each analysis as well as a method of deriving bounds, such as spending. The routine enables two things not available in the gsDesign package: 1) non-constant effect, 2) more flexibility in boundary selection. For many applications, the non-proportional-hazards design function gs_design_nph() will be used; it calls this function. Initial bound types supported are 1) spending bounds, 2) fixed bounds, and 3) Haybittle-Peto-like bounds. The requirement is to have a boundary update method that can compute each bound without knowledge of future bounds. As an example, bounds based on conditional power that require knowledge of all future bounds are not supported by this routine; a more limited conditional power method will be demonstrated. Boundary family designs such as Wang-Tsiatis designs, including the original (non-spending-function-based) O'Brien-Fleming and Pocock designs, are not supported by gs_power_npe().
Usage
gs_power_npe(
theta = 0.1,
theta0 = 0,
theta1 = theta,
info = 1,
info0 = NULL,
info1 = NULL,
info_scale = c("h0_h1_info", "h0_info", "h1_info"),
upper = gs_b,
upar = qnorm(0.975),
lower = gs_b,
lpar = -Inf,
test_upper = TRUE,
test_lower = TRUE,
binding = FALSE,
r = 18,
tol = 1e-06
)
Arguments
theta |
Natural parameter for group sequential design representing expected incremental drift at all analyses; used for power calculation. |
theta0 |
Natural parameter for null hypothesis, if needed for upper bound computation. |
theta1 |
Natural parameter for alternate hypothesis, if needed for lower bound computation. |
info |
Statistical information at all analyses for input theta. |
info0 |
Statistical information under null hypothesis, if different than info; impacts null hypothesis bound calculation. |
info1 |
Statistical information under hypothesis used for futility bound calculation if different from info; impacts futility bound calculation. |
info_scale |
Information scale for calculation. Options are "h0_h1_info" (default, using information under both null and alternative hypotheses), "h0_info" (using information under the null hypothesis), or "h1_info" (using information under the alternative hypothesis). |
upper |
Function to compute upper bound. |
upar |
Parameters passed to upper. |
lower |
Function to compute lower bound. |
lpar |
Parameters passed to lower. |
test_upper |
Indicator of which analyses should include an upper (efficacy) bound; a single value of TRUE (default) indicates all analyses, and a logical vector of the same length as info selects individual analyses. |
test_lower |
Indicator of which analyses should include a lower bound; a single value of TRUE (default) indicates all analyses, FALSE indicates none, and a logical vector of the same length as info selects individual analyses. |
binding |
Indicator of whether futility bound is binding; default of FALSE indicates non-binding. |
r |
Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide larger number of grid points and greater accuracy. Normally, r will not be changed by the user. |
tol |
Tolerance parameter for boundary convergence (on Z-scale). |
Value
A tibble with columns for analysis index, bound, Z value, crossing probability, theta (standardized treatment effect), theta1 (standardized treatment effect under the alternative hypothesis), information fraction, and statistical information.
Specification
The contents of this section are shown in PDF user manual only.
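Under standard group sequential asymptotics, theta and info determine the drift of the test statistics: the expected Z-value at analysis k is approximately theta[k] * sqrt(info[k]). A minimal base-R sketch of this relationship, with arbitrary illustrative values:

```r
theta <- c(.1, .2, .3)  # incremental drift (natural parameter) at each analysis
info  <- (1:3) * 40     # statistical information at each analysis

# Approximate expected Z-value at each analysis under the alternative
expected_z <- theta * sqrt(info)
expected_z
```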
Examples
library(gsDesign)
library(gsDesign2)
library(dplyr)
# Default (single analysis; Type I error controlled)
gs_power_npe(theta = 0) %>% filter(bound == "upper")
# Fixed bound
gs_power_npe(
theta = c(.1, .2, .3),
info = (1:3) * 40,
upper = gs_b,
upar = gsDesign::gsDesign(k = 3, sfu = gsDesign::sfLDOF)$upper$bound,
lower = gs_b,
lpar = c(-1, 0, 0)
)
# Same fixed efficacy bounds, no futility bound (i.e., non-binding bound), null hypothesis
gs_power_npe(
theta = rep(0, 3),
info = (1:3) * 40,
upar = gsDesign::gsDesign(k = 3, sfu = gsDesign::sfLDOF)$upper$bound,
lpar = rep(-Inf, 3)
) %>%
filter(bound == "upper")
# Fixed bound with futility only at analysis 1; efficacy only at analyses 2, 3
gs_power_npe(
theta = c(.1, .2, .3),
info = (1:3) * 40,
upper = gs_b,
upar = c(Inf, 3, 2),
lower = gs_b,
lpar = c(qnorm(.1), -Inf, -Inf)
)
# Spending function bounds
# Lower spending based on non-zero effect
gs_power_npe(
theta = c(.1, .2, .3),
info = (1:3) * 40,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfHSD, total_spend = 0.1, param = -1, timing = NULL)
)
# Same bounds, but power under different theta
gs_power_npe(
theta = c(.15, .25, .35),
info = (1:3) * 40,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfHSD, total_spend = 0.1, param = -1, timing = NULL)
)
# Two-sided symmetric spend, O'Brien-Fleming spending
# Typically, 2-sided bounds are binding
x <- gs_power_npe(
theta = rep(0, 3),
info = (1:3) * 40,
binding = TRUE,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL)
)
# Re-use these bounds under alternate hypothesis
# Always use binding = TRUE for power calculations
gs_power_npe(
theta = c(.1, .2, .3),
info = (1:3) * 40,
binding = TRUE,
upar = (x %>% filter(bound == "upper"))$z,
lpar = -(x %>% filter(bound == "upper"))$z
)
# Different values of `r` and `tol` lead to different numerical accuracy
# Larger `r` and smaller `tol` give better accuracy, but leads to slow computation
n_analysis <- 5
gs_power_npe(
theta = 0.1,
info = 1:n_analysis,
info0 = 1:n_analysis,
info1 = NULL,
info_scale = "h0_info",
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lower = gs_b,
lpar = -rep(Inf, n_analysis),
test_upper = TRUE,
test_lower = FALSE,
binding = FALSE,
# Try different combinations of (r, tol) with
# r in 6, 18, 24, 30, 35, 40, 50, 60, 70, 80, 90, 100
# tol in 1e-6, 1e-12
r = 6,
tol = 1e-6
)
Group sequential design power of a binary outcome measured by risk difference
Description
Group sequential design power of a binary outcome measured by risk difference
Usage
gs_power_rd(
p_c = tibble::tibble(stratum = "All", rate = 0.2),
p_e = tibble::tibble(stratum = "All", rate = 0.15),
n = tibble::tibble(stratum = "All", n = c(40, 50, 60), analysis = 1:3),
rd0 = 0,
ratio = 1,
weight = c("unstratified", "ss", "invar"),
upper = gs_b,
lower = gs_b,
upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound,
lpar = c(qnorm(0.1), rep(-Inf, 2)),
info_scale = c("h0_h1_info", "h0_info", "h1_info"),
binding = FALSE,
test_upper = TRUE,
test_lower = TRUE,
r = 18,
tol = 1e-06
)
Arguments
p_c |
Rate at the control group. |
p_e |
Rate at the experimental group. |
n |
Sample size. |
rd0 |
Treatment effect under super-superiority designs; the default is 0. |
ratio |
Experimental:Control randomization ratio. |
weight |
Weighting method; can be "unstratified", "ss" (sample size), or "invar" (inverse variance). |
upper |
Function to compute upper bound. |
lower |
Function to compute lower bound. |
upar |
Parameters passed to upper. |
lpar |
Parameters passed to lower. |
info_scale |
Information scale for calculation. Options are "h0_h1_info" (default, using information under both null and alternative hypotheses), "h0_info" (using information under the null hypothesis), or "h1_info" (using information under the alternative hypothesis). |
binding |
Indicator of whether futility bound is binding; default of FALSE indicates non-binding. |
test_upper |
Indicator of which analyses should include an upper (efficacy) bound; a single value of TRUE (default) indicates all analyses, and a logical vector of the same length as the number of analyses selects individual analyses. |
test_lower |
Indicator of which analyses should include a lower bound; a single value of TRUE (default) indicates all analyses, FALSE indicates none, and a logical vector of the same length as the number of analyses selects individual analyses. |
r |
Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide larger number of grid points and greater accuracy. Normally, r will not be changed by the user. |
tol |
Tolerance parameter for boundary convergence (on Z-scale). |
Value
A list with input parameters, analysis, and bound.
Examples
# Example 1 ----
library(gsDesign)
# unstratified case with H0: rd0 = 0
gs_power_rd(
p_c = tibble::tibble(
stratum = "All",
rate = .2
),
p_e = tibble::tibble(
stratum = "All",
rate = .15
),
n = tibble::tibble(
stratum = "All",
n = c(20, 40, 60),
analysis = 1:3
),
rd0 = 0,
ratio = 1,
upper = gs_b,
lower = gs_b,
upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound,
lpar = c(qnorm(.1), rep(-Inf, 2))
)
# Example 2 ----
# unstratified case with H0: rd0 != 0
gs_power_rd(
p_c = tibble::tibble(
stratum = "All",
rate = .2
),
p_e = tibble::tibble(
stratum = "All",
rate = .15
),
n = tibble::tibble(
stratum = "All",
n = c(20, 40, 60),
analysis = 1:3
),
rd0 = 0.005,
ratio = 1,
upper = gs_b,
lower = gs_b,
upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound,
lpar = c(qnorm(.1), rep(-Inf, 2))
)
# use spending function
gs_power_rd(
p_c = tibble::tibble(
stratum = "All",
rate = .2
),
p_e = tibble::tibble(
stratum = "All",
rate = .15
),
n = tibble::tibble(
stratum = "All",
n = c(20, 40, 60),
analysis = 1:3
),
rd0 = 0.005,
ratio = 1,
upper = gs_spending_bound,
lower = gs_b,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lpar = c(qnorm(.1), rep(-Inf, 2))
)
# Example 3 ----
# stratified case under sample size weighting and H0: rd0 = 0
gs_power_rd(
p_c = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.15, .2, .25)
),
p_e = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.1, .16, .19)
),
n = tibble::tibble(
stratum = rep(c("S1", "S2", "S3"), each = 3),
analysis = rep(1:3, 3),
n = c(10, 20, 24, 18, 26, 30, 10, 20, 24)
),
rd0 = 0,
ratio = 1,
weight = "ss",
upper = gs_b,
lower = gs_b,
upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound,
lpar = c(qnorm(.1), rep(-Inf, 2))
)
# Example 4 ----
# stratified case under inverse variance weighting and H0: rd0 = 0
gs_power_rd(
p_c = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.15, .2, .25)
),
p_e = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.1, .16, .19)
),
n = tibble::tibble(
stratum = rep(c("S1", "S2", "S3"), each = 3),
analysis = rep(1:3, 3),
n = c(10, 20, 24, 18, 26, 30, 10, 20, 24)
),
rd0 = 0,
ratio = 1,
weight = "invar",
upper = gs_b,
lower = gs_b,
upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound,
lpar = c(qnorm(.1), rep(-Inf, 2))
)
# Example 5 ----
# stratified case under sample size weighting and H0: rd0 != 0
gs_power_rd(
p_c = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.15, .2, .25)
),
p_e = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.1, .16, .19)
),
n = tibble::tibble(
stratum = rep(c("S1", "S2", "S3"), each = 3),
analysis = rep(1:3, 3),
n = c(10, 20, 24, 18, 26, 30, 10, 20, 24)
),
rd0 = 0.02,
ratio = 1,
weight = "ss",
upper = gs_b,
lower = gs_b,
upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound,
lpar = c(qnorm(.1), rep(-Inf, 2))
)
# Example 6 ----
# stratified case under inverse variance weighting and H0: rd0 != 0
gs_power_rd(
p_c = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.15, .2, .25)
),
p_e = tibble::tibble(
stratum = c("S1", "S2", "S3"),
rate = c(.1, .16, .19)
),
n = tibble::tibble(
stratum = rep(c("S1", "S2", "S3"), each = 3),
analysis = rep(1:3, 3),
n = c(10, 20, 24, 18, 26, 30, 10, 20, 24)
),
rd0 = 0.03,
ratio = 1,
weight = "invar",
upper = gs_b,
lower = gs_b,
upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound,
lpar = c(qnorm(.1), rep(-Inf, 2))
)
Group sequential design power using weighted log rank test under non-proportional hazards
Description
Group sequential design power using weighted log rank test under non-proportional hazards
Usage
gs_power_wlr(
enroll_rate = define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9)),
fail_rate = tibble(stratum = "All", duration = c(3, 100), fail_rate = log(2)/c(9, 18),
hr = c(0.9, 0.6), dropout_rate = rep(0.001, 2)),
event = c(30, 40, 50),
analysis_time = NULL,
binding = FALSE,
upper = gs_spending_bound,
lower = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lpar = list(sf = gsDesign::sfLDOF, total_spend = NULL),
test_upper = TRUE,
test_lower = TRUE,
ratio = 1,
weight = "logrank",
info_scale = c("h0_h1_info", "h0_info", "h1_info"),
approx = "asymptotic",
r = 18,
tol = 1e-06,
interval = c(0.01, 1000),
integer = FALSE
)
Arguments
enroll_rate |
Enrollment rates defined by define_enroll_rate(). |
fail_rate |
Failure and dropout rates defined by define_fail_rate(). |
event |
A numerical vector specifying the targeted events at each analysis. See details. |
analysis_time |
Targeted calendar timing of analyses. See details. |
binding |
Indicator of whether futility bound is binding; default of FALSE indicates non-binding. |
upper |
Function to compute upper bound, such as gs_spending_bound() or gs_b(). |
lower |
Function to compute lower bound, which can be set up similarly as upper. |
upar |
Parameters passed to upper. |
lpar |
Parameters passed to lower. |
test_upper |
Indicator of which analyses should include an upper (efficacy) bound; a single value of TRUE (default) indicates all analyses, and a logical vector of the same length as the number of analyses selects individual analyses. |
test_lower |
Indicator of which analyses should include a lower bound; a single value of TRUE (default) indicates all analyses, FALSE indicates none, and a logical vector of the same length as the number of analyses selects individual analyses. |
ratio |
Experimental:Control randomization ratio. |
weight |
Weight of weighted log-rank test; the default "logrank" gives the unweighted log-rank test. |
info_scale |
Information scale for calculation. Options are "h0_h1_info" (default, using information under both null and alternative hypotheses), "h0_info" (using information under the null hypothesis), or "h1_info" (using information under the alternative hypothesis). |
approx |
Approximate estimation method for Z statistics; default is "asymptotic". |
r |
Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide larger number of grid points and greater accuracy. Normally, r will not be changed by the user. |
tol |
Tolerance parameter for boundary convergence (on Z-scale); normally not changed by the user. |
interval |
An interval presumed to include the times at which expected event count is equal to targeted event. Normally, this can be ignored by the user as it is set to c(0.01, 1000). |
integer |
Indicator of whether integer sample size and events are intended; used when applying to_integer() to the design. |
Value
A list with input parameters, enrollment rate, analysis, and bound.
Specification
The contents of this section are shown in PDF user manual only.
Examples
library(gsDesign)
library(gsDesign2)
# set enrollment rates
enroll_rate <- define_enroll_rate(duration = 12, rate = 500 / 12)
# set failure rates
fail_rate <- define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 15, # median survival 15 month
hr = c(1, .6),
dropout_rate = 0.001
)
# set the targeted number of events and analysis time
target_events <- c(30, 40, 50)
target_analysisTime <- c(10, 24, 30)
# Example 1 ----
# fixed bounds and calculate the power for targeted number of events
gs_power_wlr(
enroll_rate = enroll_rate,
fail_rate = fail_rate,
event = target_events,
analysis_time = NULL,
upper = gs_b,
upar = gsDesign(
k = length(target_events),
test.type = 1,
n.I = target_events,
maxn.IPlan = max(target_events),
sfu = sfLDOF,
sfupar = NULL
)$upper$bound,
lower = gs_b,
lpar = c(qnorm(.1), rep(-Inf, 2))
)
# Example 2 ----
# fixed bounds and calculate the power for targeted analysis time
gs_power_wlr(
enroll_rate = enroll_rate,
fail_rate = fail_rate,
event = NULL,
analysis_time = target_analysisTime,
upper = gs_b,
upar = gsDesign(
k = length(target_events),
test.type = 1,
n.I = target_events,
maxn.IPlan = max(target_events),
sfu = sfLDOF,
sfupar = NULL
)$upper$bound,
lower = gs_b,
lpar = c(qnorm(.1), rep(-Inf, 2))
)
# Example 3 ----
# fixed bounds and calculate the power for targeted analysis time & number of events
gs_power_wlr(
enroll_rate = enroll_rate,
fail_rate = fail_rate,
event = target_events,
analysis_time = target_analysisTime,
upper = gs_b,
upar = gsDesign(
k = length(target_events),
test.type = 1,
n.I = target_events,
maxn.IPlan = max(target_events),
sfu = sfLDOF,
sfupar = NULL
)$upper$bound,
lower = gs_b,
lpar = c(qnorm(.1), rep(-Inf, 2))
)
# Example 4 ----
# spending bounds and calculate the power for targeted number of events
gs_power_wlr(
enroll_rate = enroll_rate,
fail_rate = fail_rate,
event = target_events,
analysis_time = NULL,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2)
)
# Example 5 ----
# spending bounds and calculate the power for targeted analysis time
gs_power_wlr(
enroll_rate = enroll_rate,
fail_rate = fail_rate,
event = NULL,
analysis_time = target_analysisTime,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2)
)
# Example 6 ----
# spending bounds and calculate the power for targeted analysis time & number of events
gs_power_wlr(
enroll_rate = enroll_rate,
fail_rate = fail_rate,
event = target_events,
analysis_time = target_analysisTime,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2)
)
Derive spending bound for group sequential boundary
Description
Computes one bound at a time based on spending under given distributional assumptions. While the user specifies gs_spending_bound() for use with other functions, it is not intended for use on its own. The most important user specifications are made through a list provided to functions using gs_spending_bound(). The function uses numerical integration and Newton-Raphson iteration to derive an individual bound for a group sequential design that satisfies a targeted boundary crossing probability. The algorithm is a simple extension of that in Chapter 19 of Jennison and Turnbull (2000).
Usage
gs_spending_bound(
k = 1,
par = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL,
max_info = NULL),
hgm1 = NULL,
theta = 0.1,
info = 1:3,
efficacy = TRUE,
test_bound = TRUE,
r = 18,
tol = 1e-06
)
Arguments
k: Analysis for which the bound is to be computed.
par: A list with the following items: sf (spending function), total_spend (total spend), param (any parameters needed by the spending function), timing (spending time at each analysis; if NULL, information-based timing is used), and max_info (when provided, info / max_info is used as the information fraction).
hgm1: Subdensity grid from the previous analysis (k - 1); if k = 1, this is not used and may be NULL.
theta: Natural parameter used for lower bound only spending; represents average drift at each time of analysis at least up to analysis k. Upper bound spending is always set under the null hypothesis (theta = 0).
info: Statistical information at all analyses, at least up to analysis k.
efficacy: TRUE (default) for an efficacy bound, FALSE for a futility bound.
test_bound: A logical vector of the same length as info, indicating which analyses have a bound.
r: Integer value controlling the grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide a larger number of grid points and greater accuracy. Normally r will not be changed by the user.
tol: Tolerance parameter for convergence (on the Z-scale).
Value
Returns a numeric bound (possibly infinite) or, upon failure, generates an error message.
Specification
The contents of this section are shown in PDF user manual only.
Author(s)
Keaven Anderson keaven_anderson@merck.com
References
Jennison C and Turnbull BW (2000), Group Sequential Methods with Applications to Clinical Trials. Boca Raton: Chapman and Hall.
Examples
gs_power_ahr(
analysis_time = c(12, 24, 36),
event = c(30, 40, 50),
binding = TRUE,
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
lower = gs_spending_bound,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL)
)
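For the first analysis, the efficacy bound simply inverts the alpha spent at the first information fraction, so it can be checked directly against the spending function. A minimal sketch, assuming default information-based timing with info = 1:3:

```r
library(gsDesign2)
# First-analysis efficacy bound under Lan-DeMets O'Brien-Fleming spending
b1 <- gs_spending_bound(
  k = 1,
  par = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
  theta = 0, # efficacy spending is set under the null hypothesis
  info = 1:3 # timing defaults to info / max(info), so t = 1/3 at analysis 1
)
# The bound should approximately invert the cumulative spend at t = 1/3
alpha1 <- gsDesign::sfLDOF(alpha = 0.025, t = 1 / 3)$spend
c(bound = b1, direct = qnorm(1 - alpha1))
```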
Derive spending bound for MaxCombo group sequential boundary
Description
Derive spending bound for MaxCombo group sequential boundary
Usage
gs_spending_combo(par = NULL, info = NULL)
Arguments
par: A list with the following items: sf (spending function), total_spend (total spend), and param (any parameters needed by the spending function).
info: Statistical information at all analyses, at least up to analysis k.
Value
A vector of the cumulative alpha spending at each analysis.
Examples
# alpha-spending
par <- list(sf = gsDesign::sfLDOF, total_spend = 0.025)
gs_spending_combo(par, info = 1:3 / 3)
par <- list(sf = gsDesign::sfLDPocock, total_spend = 0.025)
gs_spending_combo(par, info = 1:3 / 3)
par <- list(sf = gsDesign::sfHSD, total_spend = 0.025, param = -40)
gs_spending_combo(par, info = 1:3 / 3)
# Kim-DeMets (power) Spending Function
par <- list(sf = gsDesign::sfPower, total_spend = 0.025, param = 1.5)
gs_spending_combo(par, info = 1:3 / 3)
# Exponential Spending Function
par <- list(sf = gsDesign::sfExponential, total_spend = 0.025, param = 1)
gs_spending_combo(par, info = 1:3 / 3)
# Two-parameter Spending Function Families
par <- list(sf = gsDesign::sfLogistic, total_spend = 0.025, param = c(.1, .4, .01, .1))
gs_spending_combo(par, info = 1:3 / 3)
par <- list(sf = gsDesign::sfBetaDist, total_spend = 0.025, param = c(.1, .4, .01, .1))
gs_spending_combo(par, info = 1:3 / 3)
par <- list(sf = gsDesign::sfCauchy, total_spend = 0.025, param = c(.1, .4, .01, .1))
gs_spending_combo(par, info = 1:3 / 3)
par <- list(sf = gsDesign::sfExtremeValue, total_spend = 0.025, param = c(.1, .4, .01, .1))
gs_spending_combo(par, info = 1:3 / 3)
par <- list(sf = gsDesign::sfExtremeValue2, total_spend = 0.025, param = c(.1, .4, .01, .1))
gs_spending_combo(par, info = 1:3 / 3)
par <- list(sf = gsDesign::sfNormal, total_spend = 0.025, param = c(.1, .4, .01, .1))
gs_spending_combo(par, info = 1:3 / 3)
# t-distribution Spending Function
par <- list(sf = gsDesign::sfTDist, total_spend = 0.025, param = c(-1, 1.5, 4))
gs_spending_combo(par, info = 1:3 / 3)
# Piecewise Linear and Step Function Spending Functions
par <- list(sf = gsDesign::sfLinear, total_spend = 0.025, param = c(.2, .4, .05, .2))
gs_spending_combo(par, info = 1:3 / 3)
par <- list(sf = gsDesign::sfStep, total_spend = 0.025, param = c(1 / 3, 2 / 3, .1, .1))
gs_spending_combo(par, info = 1:3 / 3)
# Pointwise Spending Function
par <- list(sf = gsDesign::sfPoints, total_spend = 0.025, param = c(.25, .25))
gs_spending_combo(par, info = 1:3 / 3)
# Truncated, trimmed and gapped spending functions
par <- list(sf = gsDesign::sfTruncated, total_spend = 0.025,
param = list(trange = c(.2, .8), sf = gsDesign::sfHSD, param = 1))
gs_spending_combo(par, info = 1:3 / 3)
par <- list(sf = gsDesign::sfTrimmed, total_spend = 0.025,
param = list(trange = c(.2, .8), sf = gsDesign::sfHSD, param = 1))
gs_spending_combo(par, info = 1:3 / 3)
par <- list(sf = gsDesign::sfGapped, total_spend = 0.025,
param = list(trange = c(.2, .8), sf = gsDesign::sfHSD, param = 1))
gs_spending_combo(par, info = 1:3 / 3)
# Xi and Gallo conditional error spending functions
par <- list(sf = gsDesign::sfXG1, total_spend = 0.025, param = 0.5)
gs_spending_combo(par, info = 1:3 / 3)
par <- list(sf = gsDesign::sfXG2, total_spend = 0.025, param = 0.14)
gs_spending_combo(par, info = 1:3 / 3)
par <- list(sf = gsDesign::sfXG3, total_spend = 0.025, param = 0.013)
gs_spending_combo(par, info = 1:3 / 3)
# beta-spending
par <- list(sf = gsDesign::sfLDOF, total_spend = 0.2)
gs_spending_combo(par, info = 1:3 / 3)
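The returned spend is cumulative, so it is nondecreasing and its final element equals total_spend. A quick check with the Lan-DeMets O'Brien-Fleming spending function:

```r
library(gsDesign2)
sp <- gs_spending_combo(
  par = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
  info = 1:3 / 3
)
sp                       # nondecreasing cumulative alpha spend
all.equal(sp[3], 0.025)  # full alpha is spent at the final analysis
```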
Group sequential design using average hazard ratio under non-proportional hazards
Description
Group sequential design using average hazard ratio under non-proportional hazards
Usage
gs_update_ahr(
x = NULL,
alpha = NULL,
ustime = NULL,
lstime = NULL,
event_tbl = NULL
)
Arguments
x: A design created by either gs_design_ahr() or gs_power_ahr().
alpha: Type I error for the updated design.
ustime: Default is NULL, in which case the upper bound spending time is determined by timing. Otherwise, a vector of length k (the total number of analyses) with the spending time at each analysis.
lstime: Default is NULL, in which case the lower bound spending time is determined by timing. Otherwise, a vector of length k (the total number of analyses) with the spending time at each analysis.
event_tbl: A data frame with two columns, (1) analysis and (2) event, representing the events observed at each analysis per piecewise interval. This can be defined via the pw_info() function.
Value
A list with input parameters, enrollment rate, failure rate, analysis, and bound.
Examples
library(gsDesign)
library(gsDesign2)
library(dplyr)
alpha <- 0.025
beta <- 0.1
ratio <- 1
# Enrollment
enroll_rate <- define_enroll_rate(
duration = c(2, 2, 10),
rate = (1:3) / 3)
# Failure and dropout
fail_rate <- define_fail_rate(
duration = c(3, Inf), fail_rate = log(2) / 9,
hr = c(1, 0.6), dropout_rate = .0001)
# IA and FA analysis time
analysis_time <- c(20, 36)
# Randomization ratio
ratio <- 1
# ------------------------------------------------- #
# Two-sided asymmetric design,
# beta-spending with non-binding lower bound
# ------------------------------------------------- #
# Original design
x <- gs_design_ahr(
enroll_rate = enroll_rate, fail_rate = fail_rate,
alpha = alpha, beta = beta, ratio = ratio,
info_scale = "h0_info",
info_frac = NULL, analysis_time = c(20, 36),
upper = gs_spending_bound,
upar = list(sf = sfLDOF, total_spend = alpha),
test_upper = TRUE,
lower = gs_spending_bound,
lpar = list(sf = sfLDOF, total_spend = beta),
test_lower = c(TRUE, FALSE),
binding = FALSE) %>% to_integer()
planned_event_ia <- x$analysis$event[1]
planned_event_fa <- x$analysis$event[2]
# Updated design with 190 events observed at IA,
# where 50 events were observed during the delayed effect.
# IA spending = observed events / planned final events; the remaining alpha is allocated to the FA.
gs_update_ahr(
x = x,
ustime = c(190 / planned_event_fa, 1),
lstime = c(190 / planned_event_fa, 1),
event_tbl = data.frame(analysis = c(1, 1),
event = c(50, 140)))
# Updated design with 190 events observed at IA and 300 events observed at FA,
# where 50 events were observed during the delayed effect.
# IA spending = observed events / planned final events; the remaining alpha is allocated to the FA.
gs_update_ahr(
x = x,
ustime = c(190 / planned_event_fa, 1),
lstime = c(190 / planned_event_fa, 1),
event_tbl = data.frame(analysis = c(1, 1, 2, 2),
event = c(50, 140, 50, 250)))
# Updated design with 190 events observed at IA and 300 events observed at FA,
# where 50 events were observed during the delayed effect.
# IA spending = minimum of the planned and actual information fraction spending
gs_update_ahr(
x = x,
ustime = c(min(190, planned_event_ia) / planned_event_fa, 1),
lstime = c(min(190, planned_event_ia) / planned_event_fa, 1),
event_tbl = data.frame(analysis = c(1, 1, 2, 2),
event = c(50, 140, 50, 250)))
# Alpha is updated to 0.05
gs_update_ahr(x = x, alpha = 0.05)
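The spending-time arithmetic in the updates above can be checked by hand, reusing planned_event_ia and planned_event_fa from this example:

```r
# Observed interim events (as in the update examples above)
observed_ia <- 190
# Actual information fraction, used as the IA spending time
observed_ia / planned_event_fa
# Capped version: never spend faster than planned
min(observed_ia, planned_event_ia) / planned_event_fa
```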
Piecewise exponential cumulative distribution function
Description
Computes the cumulative distribution function (CDF) or survival rate for a piecewise exponential distribution.
Usage
ppwe(x, duration, rate, lower_tail = FALSE)
Arguments
x: Times at which the distribution is to be computed.
duration: A numeric vector of time durations.
rate: A numeric vector of event rates.
lower_tail: Indicator of whether the lower (TRUE; CDF) or upper (FALSE, the default; survival) tail is computed.
Details
Suppose \lambda_i is the failure rate in the interval (t_{i-1}, t_i], i = 1, 2, \ldots, M, where 0 = t_0 < t_1 < \ldots < t_M = \infty. The cumulative hazard function at an arbitrary time t > 0 is then
\Lambda(t) = \sum_{i=1}^M \delta(t > t_{i-1})(\min(t, t_i) - t_{i-1})\lambda_i,
where \delta(\cdot) is the indicator function. The survival at time t is then
S(t) = \exp(-\Lambda(t)).
Value
A vector with cumulative distribution function or survival values.
Specification
The contents of this section are shown in PDF user manual only.
Examples
# Plot a survival function with 2 different sets of time values
# to demonstrate plot precision corresponding to input parameters.
x1 <- seq(0, 10, 10 / pi)
duration <- c(3, 3, 1)
rate <- c(.2, .1, .005)
survival <- ppwe(
x = x1,
duration = duration,
rate = rate
)
plot(x1, survival, type = "l", ylim = c(0, 1))
x2 <- seq(0, 10, .25)
survival <- ppwe(
x = x2,
duration = duration,
rate = rate
)
lines(x2, survival, col = 2)
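The formula in Details can also be verified by hand. With duration = c(3, 3, 1) and rate = c(.2, .1, .005), the cumulative hazard at t = 4 is Lambda(4) = 3 * 0.2 + 1 * 0.1 = 0.7, so the survival is exp(-0.7):

```r
library(gsDesign2)
# ppwe() returns survival by default (lower_tail = FALSE)
ppwe(x = 4, duration = c(3, 3, 1), rate = c(.2, .1, .005))
exp(-0.7) # closed-form check, approximately 0.4966
```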
Average hazard ratio under non-proportional hazards
Description
Provides a geometric average hazard ratio under various non-proportional hazards assumptions for either single- or multiple-strata studies. The piecewise exponential distribution allows a simple method to specify a distribution and enrollment pattern where the enrollment, failure, and dropout rates change over time.
Usage
pw_info(
enroll_rate = define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9)),
fail_rate = define_fail_rate(duration = c(3, 100), fail_rate = log(2)/c(9, 18), hr =
c(0.9, 0.6), dropout_rate = 0.001),
total_duration = 30,
ratio = 1
)
Arguments
enroll_rate: An enroll_rate data frame with or without stratum, created by define_enroll_rate().
fail_rate: A fail_rate data frame with or without stratum, created by define_fail_rate().
total_duration: Total follow-up from start of enrollment to data cutoff; this can be a single value or a vector of positive numbers.
ratio: Ratio of experimental to control randomization.
Value
A data frame with time (from total_duration), stratum, t, hr (hazard ratio), event (expected number of events), info (information under given scenarios), info0 (information under the related null hypothesis), and n (sample size) for each value of the total_duration input.
Examples
# Example: default
pw_info()
# Example: default with multiple analysis times (varying total_duration)
pw_info(total_duration = c(15, 30))
# Stratified population
enroll_rate <- define_enroll_rate(
stratum = c(rep("Low", 2), rep("High", 3)),
duration = c(2, 10, 4, 4, 8),
rate = c(5, 10, 0, 3, 6)
)
fail_rate <- define_fail_rate(
stratum = c(rep("Low", 2), rep("High", 2)),
duration = c(1, Inf, 1, Inf),
fail_rate = c(.1, .2, .3, .4),
dropout_rate = .001,
hr = c(.9, .75, .8, .6)
)
# Give results by change-points in the piecewise model
ahr(enroll_rate = enroll_rate, fail_rate = fail_rate, total_duration = c(15, 30))
# Same example, give results by strata and time period
pw_info(enroll_rate = enroll_rate, fail_rate = fail_rate, total_duration = c(15, 30))
Approximate survival distribution with piecewise exponential distribution
Description
Converts a discrete set of points from an arbitrary survival distribution to a piecewise exponential approximation.
Usage
s2pwe(times, survival)
Arguments
times: Positive increasing times at which the survival distribution is provided.
survival: Survival (1 - cumulative distribution function) at the specified times.
Value
A tibble containing the duration and rate.
Specification
The contents of this section are shown in PDF user manual only.
Examples
# Example: arbitrary numbers
s2pwe(1:9, (9:1) / 10)
# Example: lognormal
s2pwe(c(1:6, 9), plnorm(c(1:6, 9), meanlog = 0, sdlog = 2, lower.tail = FALSE))
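Because s2pwe() derives rates from successive log-survival differences, it matches the supplied survival values at the input times, so evaluating the resulting piecewise exponential with ppwe() at those times recovers the inputs. A round-trip sketch:

```r
library(gsDesign2)
times <- c(1:6, 9)
surv <- plnorm(times, meanlog = 0, sdlog = 2, lower.tail = FALSE)
pw <- s2pwe(times, surv)
# Evaluate the piecewise exponential approximation at the original times;
# differences from the input survival should be numerically zero
ppwe(x = times, duration = pw$duration, rate = pw$rate) - surv
```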
Summary for fixed design or group sequential design objects
Description
Summary for fixed design or group sequential design objects
Usage
## S3 method for class 'fixed_design'
summary(object, ...)
## S3 method for class 'gs_design'
summary(
object,
analysis_vars = NULL,
analysis_decimals = NULL,
col_vars = NULL,
col_decimals = NULL,
bound_names = c("Efficacy", "Futility"),
...
)
Arguments
object: A design object returned by fixed_design_xxx() or gs_design_xxx().
...: Additional parameters (not used).
analysis_vars: The variables to be put in the summary header of each analysis.
analysis_decimals: The number of decimal digits displayed for analysis_vars.
col_vars: The variables to be displayed.
col_decimals: The decimals to be displayed for the variables in col_vars.
bound_names: Names for bounds; default is c("Efficacy", "Futility").
Value
A summary table (data frame).
Examples
library(dplyr)
# Enrollment rate
enroll_rate <- define_enroll_rate(
duration = 18,
rate = 20
)
# Failure rates
fail_rate <- define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
hr = c(1, .6),
dropout_rate = .001
)
# Study duration in months
study_duration <- 36
# Experimental / Control randomization ratio
ratio <- 1
# 1-sided Type I error
alpha <- 0.025
# Type II error (1 - power)
beta <- 0.1
# AHR ----
# under fixed power
fixed_design_ahr(
alpha = alpha,
power = 1 - beta,
enroll_rate = enroll_rate,
fail_rate = fail_rate,
study_duration = study_duration,
ratio = ratio
) %>% summary()
# FH ----
# under fixed power
fixed_design_fh(
alpha = alpha,
power = 1 - beta,
enroll_rate = enroll_rate,
fail_rate = fail_rate,
study_duration = study_duration,
ratio = ratio
) %>% summary()
# Design parameters ----
library(gsDesign)
library(gsDesign2)
library(dplyr)
# enrollment/failure rates
enroll_rate <- define_enroll_rate(
stratum = "All",
duration = 12,
rate = 1
)
fail_rate <- define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
hr = c(1, .6),
dropout_rate = .001
)
# Information fraction
info_frac <- (1:3) / 3
# Analysis times in months; first 2 will be ignored as info_frac will not be achieved
analysis_time <- c(.01, .02, 36)
# Experimental / Control randomization ratio
ratio <- 1
# 1-sided Type I error
alpha <- 0.025
# Type II error (1 - power)
beta <- .1
# Upper bound
upper <- gs_spending_bound
upar <- list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL)
# Lower bound
lower <- gs_spending_bound
lpar <- list(sf = gsDesign::sfHSD, total_spend = 0.1, param = 0, timing = NULL)
# test in COMBO
fh_test <- rbind(
data.frame(rho = 0, gamma = 0, tau = -1, test = 1, analysis = 1:3, analysis_time = c(12, 24, 36)),
data.frame(rho = c(0, 0.5), gamma = 0.5, tau = -1, test = 2:3, analysis = 3, analysis_time = 36)
)
# Example 1 ----
x_ahr <- gs_design_ahr(
enroll_rate = enroll_rate,
fail_rate = fail_rate,
info_frac = info_frac, # Information fraction
analysis_time = analysis_time,
ratio = ratio,
alpha = alpha,
beta = beta,
upper = upper,
upar = upar,
lower = lower,
lpar = lpar
)
x_ahr %>% summary()
# Customize the digits to display
x_ahr %>% summary(analysis_vars = c("time", "event", "info_frac"), analysis_decimals = c(1, 0, 2))
# Customize the labels of the crossing probability
x_ahr %>% summary(bound_names = c("A is better", "B is better"))
# Customize the variables to be summarized for each analysis
x_ahr %>% summary(analysis_vars = c("n", "event"), analysis_decimals = c(1, 1))
# Customize the digits for the columns
x_ahr %>% summary(col_decimals = c(z = 4))
# Customize the columns to display
x_ahr %>% summary(col_vars = c("z", "~hr at bound", "nominal p"))
# Customize columns and digits
x_ahr %>% summary(col_vars = c("z", "~hr at bound", "nominal p"),
col_decimals = c(4, 2, 2))
# Example 2 ----
x_wlr <- gs_design_wlr(
enroll_rate = enroll_rate,
fail_rate = fail_rate,
weight = list(method = "fh", param = list(rho = 0, gamma = 0.5)),
info_frac = NULL,
analysis_time = sort(unique(x_ahr$analysis$time)),
ratio = ratio,
alpha = alpha,
beta = beta,
upper = upper,
upar = upar,
lower = lower,
lpar = lpar
)
x_wlr %>% summary()
# MaxCombo ----
x_combo <- gs_design_combo(
ratio = 1,
alpha = 0.025,
beta = 0.2,
enroll_rate = define_enroll_rate(duration = 12, rate = 500 / 12),
fail_rate = tibble::tibble(
stratum = "All",
duration = c(4, 100),
fail_rate = log(2) / 15, hr = c(1, .6), dropout_rate = .001
),
fh_test = fh_test,
upper = gs_spending_combo,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
lower = gs_spending_combo,
lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2)
)
x_combo %>% summary()
# Risk difference ----
gs_design_rd(
p_c = tibble::tibble(stratum = "All", rate = .2),
p_e = tibble::tibble(stratum = "All", rate = .15),
info_frac = c(0.7, 1),
rd0 = 0,
alpha = .025,
beta = .1,
ratio = 1,
stratum_prev = NULL,
weight = "unstratified",
upper = gs_b,
lower = gs_b,
upar = gsDesign::gsDesign(
k = 3, test.type = 1, sfu = gsDesign::sfLDOF, sfupar = NULL
)$upper$bound,
lpar = c(qnorm(.1), rep(-Inf, 2))
) %>% summary()
Generates a textual summary of a group sequential design using the AHR method.
Description
Generates a textual summary of a group sequential design using the AHR method.
Usage
text_summary(x, information = FALSE, time_unit = "months")
Arguments
x: A design object created by gs_design_ahr() and related design or update functions.
information: A logical value indicating whether to include statistical information in the textual summary. Default is FALSE.
time_unit: A character string specifying the time unit used in the design. Options include "days", "weeks", "months" (default), and "years".
Value
A character string containing a paragraph that summarizes the design.
Examples
library(gsDesign)
# Text summary of a 1-sided design
x <- gs_design_ahr(info_frac = 1:3/3, test_lower = FALSE) %>% to_integer()
x %>% text_summary()
# Text summary of a 2-sided symmetric design
x <- gs_design_ahr(info_frac = 1:3/3,
upper = gs_spending_bound, lower = gs_spending_bound,
upar = list(sf = sfLDOF, total_spend = 0.025),
lpar = list(sf = sfLDOF, total_spend = 0.025),
binding = TRUE, h1_spending = FALSE) %>% to_integer()
x %>% text_summary()
# Text summary of an asymmetric 2-sided design with beta-spending and non-binding futility bound
x <- gs_design_ahr(info_frac = 1:3/3, alpha = 0.025, beta = 0.1,
upper = gs_spending_bound, lower = gs_spending_bound,
upar = list(sf = sfLDOF, total_spend = 0.025),
lpar = list(sf = sfHSD, total_spend = 0.1, param = -4),
binding = FALSE, h1_spending = TRUE) %>% to_integer()
x %>% text_summary()
# Text summary of an asymmetric 2-sided design with fixed non-binding futility bound
x <- gs_design_ahr(info_frac = 1:3/3, alpha = 0.025, beta = 0.1,
upper = gs_spending_bound, lower = gs_b,
upar = list(sf = sfLDOF, total_spend = 0.025),
test_upper = c(FALSE, TRUE, TRUE),
lpar = c(-1, -Inf, -Inf),
test_lower = c(TRUE, FALSE, FALSE),
binding = FALSE, h1_spending = TRUE) %>% to_integer()
x %>% text_summary()
# If there are >5 pieces of HRs, we provide a brief summary of HR.
gs_design_ahr(
fail_rate = define_fail_rate(duration = c(rep(3, 5), Inf),
hr = c(0.9, 0.8, 0.7, 0.6, 0.5, 0.4),
fail_rate = log(2) / 10, dropout_rate = 0.001),
info_frac = 1:3/3, test_lower = FALSE) %>%
text_summary()
Round sample size and events
Description
Round sample size and events
Usage
to_integer(x, ...)
## S3 method for class 'fixed_design'
to_integer(x, round_up_final = TRUE, ratio = x$input$ratio, ...)
## S3 method for class 'gs_design'
to_integer(x, round_up_final = TRUE, ratio = x$input$ratio, ...)
Arguments
x: An object returned by fixed_design_xxx() or gs_design_xxx().
...: Additional parameters (not used).
round_up_final: Events at the final analysis are rounded up if TRUE; otherwise, rounded to the nearest integer.
ratio: Positive integer for randomization ratio (experimental:control). A positive integer results in a rounded sample size that is a multiple of (ratio + 1). A positive non-integer results in a rounded sample size that may not be a multiple of (ratio + 1). A negative number results in an error.
Details
For the sample size of the fixed design:
- When ratio is a positive integer, the sample size is rounded up to a multiple of ratio + 1 if round_up_final = TRUE, and just rounded to a multiple of ratio + 1 if round_up_final = FALSE.
- When ratio is a positive non-integer, the sample size is rounded up if round_up_final = TRUE and just rounded if round_up_final = FALSE; in either case it may not be a multiple of ratio + 1. Note the default ratio is taken from x$input$ratio.
For the number of events of the fixed design:
- If the continuous number of events is within 0.01 of an integer (say, 100.001 or 99.999), it is rounded to that integer (100).
- Otherwise, it is rounded up if round_up_final = TRUE and rounded if round_up_final = FALSE.
For the sample size of group sequential designs:
- When ratio is a positive integer, the final sample size is rounded to a multiple of ratio + 1.
  - For 1:1 randomization (experimental:control), set ratio = 1 to round to an even sample size.
  - For 2:1 randomization, set ratio = 2 to round to a multiple of 3.
  - For 3:2 randomization, set ratio = 4 to round to a multiple of 5.
  - Note that for the final analysis, the sample size is rounded up to the nearest multiple of ratio + 1 if round_up_final = TRUE; if round_up_final = FALSE, it is rounded to the nearest multiple of ratio + 1.
- When ratio is a positive non-integer, the final sample size may not be a multiple of ratio + 1. It is rounded up if round_up_final = TRUE and just rounded otherwise.
For the events of group sequential designs:
- Events at interim analyses are rounded to the nearest integer.
- For events at the final analysis:
  - If the continuous number of events is within 0.01 of an integer (say, 100.001 or 99.999), it is rounded to that integer (100).
  - Otherwise, the final events count is rounded up if round_up_final = TRUE and rounded if round_up_final = FALSE.
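The integer-ratio rounding rule above can be sketched as a small helper; this is a simplified illustration of the arithmetic, not the package internals, and round_to_multiple() is a hypothetical name:

```r
# Hypothetical sketch of the final-sample-size rule for integer ratio
round_to_multiple <- function(n, ratio, round_up_final = TRUE) {
  m <- ratio + 1
  if (round_up_final) ceiling(n / m) * m else round(n / m) * m
}
round_to_multiple(100.2, ratio = 2)                          # 102, next multiple of 3
round_to_multiple(100.2, ratio = 2, round_up_final = FALSE)  # 99, nearest multiple of 3
```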
Value
A list similar to the output of fixed_design_xxx() and gs_design_xxx(), except the sample size is an integer.
Examples
library(dplyr)
library(gsDesign2)
# Average hazard ratio
x <- fixed_design_ahr(
alpha = .025, power = .9,
enroll_rate = define_enroll_rate(duration = 18, rate = 1),
fail_rate = define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12, hr = c(1, .6),
dropout_rate = .001
),
study_duration = 36
)
x %>%
to_integer() %>%
summary()
# FH
x <- fixed_design_fh(
alpha = 0.025, power = 0.9,
enroll_rate = define_enroll_rate(duration = 18, rate = 20),
fail_rate = define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12,
hr = c(1, .6),
dropout_rate = .001
),
rho = 0.5, gamma = 0.5,
study_duration = 36, ratio = 1
)
x %>%
to_integer() %>%
summary()
# MB
x <- fixed_design_mb(
alpha = 0.025, power = 0.9,
enroll_rate = define_enroll_rate(duration = 18, rate = 20),
fail_rate = define_fail_rate(
duration = c(4, 100),
fail_rate = log(2) / 12, hr = c(1, .6),
dropout_rate = .001
),
tau = Inf, w_max = 2,
study_duration = 36, ratio = 1
)
x %>%
to_integer() %>%
summary()
# Example 1: Information fraction based spending
gs_design_ahr(
analysis_time = c(18, 30),
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL),
lower = gs_b,
lpar = c(-Inf, -Inf)
) %>%
to_integer() %>%
summary()
gs_design_wlr(
analysis_time = c(18, 30),
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL),
lower = gs_b,
lpar = c(-Inf, -Inf)
) %>%
to_integer() %>%
summary()
gs_design_rd(
p_c = tibble::tibble(stratum = c("A", "B"), rate = c(.2, .3)),
p_e = tibble::tibble(stratum = c("A", "B"), rate = c(.15, .27)),
weight = "ss",
stratum_prev = tibble::tibble(stratum = c("A", "B"), prevalence = c(.4, .6)),
info_frac = c(0.7, 1),
upper = gs_spending_bound,
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL),
lower = gs_b,
lpar = c(-Inf, -Inf)
) %>%
to_integer() %>%
summary()
# Example 2: Calendar based spending
x <- gs_design_ahr(
upper = gs_spending_bound,
analysis_time = c(18, 30),
upar = list(
sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL,
timing = c(18, 30) / 30
),
lower = gs_b,
lpar = c(-Inf, -Inf)
) %>% to_integer()
# The IA nominal p-value is the same as the IA alpha spending
x$bound$`nominal p`[1]
gsDesign::sfLDOF(alpha = 0.025, t = 18 / 30)$spend
Weight functions for weighted log-rank test
Description
- wlr_weight_fh is the Fleming-Harrington, FH(rho, gamma), weight function.
- wlr_weight_1 is the constant weight for the logrank test.
- wlr_weight_n is the Gehan-Breslow and Tarone-Ware weight function.
- wlr_weight_mb is the Magirr (2021) weight function.
Usage
wlr_weight_fh(x, arm0, arm1, rho = 0, gamma = 0, tau = NULL)
wlr_weight_1(x, arm0, arm1)
wlr_weight_n(x, arm0, arm1, power = 1)
wlr_weight_mb(x, arm0, arm1, tau = NULL, w_max = Inf)
Arguments
x: A vector of numeric values.
arm0: An arm object defined in the npsurvSS package.
arm1: An arm object defined in the npsurvSS package.
rho: A scalar parameter that controls the type of test.
gamma: A scalar parameter that controls the type of test.
tau: A scalar parameter of the cut-off time for the modest weighted logrank test.
power: A scalar parameter that controls the power of the weight function.
w_max: A scalar parameter of the cut-off weight for the modest weighted logrank test.
Value
A vector of weights.
Specification
The contents of this section are shown in PDF user manual only.
Examples
enroll_rate <- define_enroll_rate(
duration = c(2, 2, 10),
rate = c(3, 6, 9)
)
fail_rate <- define_fail_rate(
duration = c(3, 100),
fail_rate = log(2) / c(9, 18),
hr = c(.9, .6),
dropout_rate = .001
)
gs_arm <- gs_create_arm(enroll_rate, fail_rate, ratio = 1)
arm0 <- gs_arm$arm0
arm1 <- gs_arm$arm1
wlr_weight_fh(1:3, arm0, arm1, rho = 0, gamma = 0, tau = NULL)
wlr_weight_1(1:3, arm0, arm1)
wlr_weight_n(1:3, arm0, arm1, power = 2)
wlr_weight_mb(1:3, arm0, arm1, tau = -1, w_max = 1.2)