Predictive Dispatch of Volunteer First Responders: Algorithm Development and Validation


Introduction

Background

Emergency response apps, commonly smartphone based, are increasingly being used to identify and dispatch volunteer first responders (VFRs) to the location of a medical emergency []. Automated dispatch algorithms generally rely on a simple estimated time of arrival (ETA) calculation based on the locations of the VFRs and the incident as well as the known modes of transport. A key aspect lacking in these algorithms is a consideration of the likelihood of response; for instance, given a set of potential VFRs with equivalent ETAs, which subset should be alerted to maximize the likelihood of response? The automated dispatch of VFRs to medical emergencies is suboptimal owing to a large percentage of alerts wasted on VFRs with shorter ETA but a low likelihood of response. This results in delays until a volunteer who will actually respond can be identified and dispatched. Using actual demographic and response data taken from a 12-month study of 112 VFRs alerted to respond to opioid overdose emergencies, we applied a series of analytical methods and advanced classification models to learn and predict volunteer response behaviors. Our findings can be used to improve dispatch algorithms in VFR networks to optimize dispatch decisions and increase the likelihood of timely emergency responses.

Medical Emergencies

A medical emergency is an acute injury or illness that can result in death or long-term health complications []. Some common medical emergencies include out-of-hospital cardiac arrest (OHCA), severe trauma, opioid overdose, and anaphylaxis. OHCA is a leading cause of death worldwide [], with a poor survival rate (only 5.6% in adults) []. Major trauma is the sixth leading cause of death worldwide [,]. Opioid overdose is a severe public health problem that has been consistently rising for the past 20 years and in the United States is the leading cause of accidental death []. The incidence of anaphylaxis ranges from 1.5 to 7.9 per 100,000 population per year in Europe [,].

Networks of VFRs

The immediate provision of first aid is crucial in lowering mortality and improving long-term prognosis, particularly in regard to OHCA [-] and opioid overdose events []. Emergency medical services (EMS) are the primary first aid provider [,], but EMS response times vary significantly among countries and geographies [,]. Interventions to achieve faster response times include the deployment of automatic external defibrillators (AEDs) in public places [-] and the establishment of local networks of VFRs [-]. Recently, there has been a concerted effort to use smartphone apps for faster emergency response, such as PulsePoint, HelpAround, Heartrunner, and UnityPhilly. An extensive review of emergency response apps can be found in the study by Gaziel-Yablowitz and Schwartz [].

An emergency response community (ERC) [], a subtype of a VFR network, is a social network of patients who are prescribed to carry life-saving medication for themselves and can potentially help other patients who are without their medication in a medical emergency. Two projects that apply the ERC approach are the subjects of recent field studies: EPIMADA, which focused on patients at risk of anaphylaxis and their parents []; and UnityPhilly, which focuses on people who have experienced an opioid overdose [].

Willingness to Respond, Barriers, and Facilitators

Once a person becomes a volunteer, they are expected to respond if available when a relevant event occurs. However, the actual rates of response to emergency alerts are far from 100%. Brooks et al [] reported a response rate of 23% among PulsePoint volunteers. In a recent study, the willingness of cardiopulmonary resuscitation (CPR)–trained bystanders to respond to an OHCA event was 46.6% []. Another study analyzed barriers to receiving notifications and reported that 32% of the responders who were sent notifications did not receive the notification because, for example, they were away from their device (21%), their device was switched off (8%), or their device was out of network range (4%) []. Stress levels among responders varied for different medical conditions, different locations, and different demographic groups []. Younger age, higher education level, shorter time since the last CPR training, and cardiac arrest event in a public location were good predictors of bystanders’ greater willingness to perform CPR. The main reasons for not performing CPR were panic, the perception of bystanders that they are not able to perform CPR correctly, and a fear of hurting the patient []. Familial experiences of receiving CPR were associated with an increase in responders’ willingness to perform CPR []. The UnityPhilly study, which established a network of volunteers to provide naloxone to those experiencing an opioid overdose, reported that 17% of the alerted volunteers accepted the alert, and 11.9% of the alerted volunteers arrived at the scene [].

Dispatch Algorithms and Decision-Making

Complexity of VFR Dispatch and Decision-Making

The complexity of VFR dispatch stems from 2 sources: unknown resource location and uncertain response. Emergency response services that try to optimize their own resources to maximize their effectiveness can determine the allocation of their resources, such as ambulance dispatch stations or police patrol districts, subject to constraints (eg, budgets) [-]. The administrators of a VFR network are unable to plan and control the location of their resources because VFRs perform their daily activities until called to action: they can be anywhere, enter and exit the area that the network covers, switch on and off their mobile phones, and so on. In addition, although ambulance staff or a police patrol are expected to respond to any event that they are dispatched to, VFRs decide for themselves whether to respond to a specific event.

Usual Location–Based Dispatch Approach Using Pagers and SMS Text Messages

In a typical location-based approach, VFRs are alerted based on their usual location (eg, home or work address) and not their actual location at the moment of the alert. VFRs may not provide any feedback to the system regarding their availability to respond to the specific event and just show up on the scene if they can; for example, this approach was used by Zijlstra et al [] who sent SMS text messages to volunteers living within a 1000-meter radius of an OHCA event.

Current Location–Based Dispatch Approach

A current location–based dispatch approach is based on a smartphone app that continuously sends VFRs’ locations (eg, geospatial coordinates) to a central server. When an emergency event is registered in the system, the dispatch algorithm selects volunteers based on their distance from the scene or, in a more advanced version, based on their ETA []. Such apps can also allow VFRs to set their availability status to control for their commitment, which was found to be an important factor of VFRs’ willingness to volunteer []. Location-based dispatch is widely used in VFR networks [,,,]. Usually, location-based algorithms dispatch >1 volunteer, if available, but still limit the number of volunteers who are dispatched to prevent burnout and a decrease in self-efficacy. Sending a large number of responders to each event can lead to the “diffusion of responsibility” phenomenon and reduce willingness to respond [].
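As a rough illustration of this kind of selection logic, the following Python sketch alerts the closest available volunteers under a simple ETA estimate. The walking-speed constant, the cap on the number of alerts, and the availability flag are illustrative assumptions, not the logic of any specific app.

```python
from dataclasses import dataclass

WALKING_SPEED_M_PER_MIN = 80  # assumed average walking speed, for illustration only


@dataclass
class Volunteer:
    volunteer_id: str
    distance_m: float  # distance to the scene from the latest location update
    available: bool    # availability status set by the volunteer in the app


def eta_minutes(volunteer: Volunteer) -> float:
    """Crude ETA: distance divided by an assumed walking speed."""
    return volunteer.distance_m / WALKING_SPEED_M_PER_MIN


def select_volunteers(candidates: list[Volunteer], max_alerts: int = 3) -> list[Volunteer]:
    """Alert the closest available volunteers, capped to avoid over-alerting."""
    available = [v for v in candidates if v.available]
    return sorted(available, key=eta_minutes)[:max_alerts]
```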

Autonomous Dispatch Versus EMS-Mediated Dispatch

Some VFR networks are managed by EMS and are integrated into their business processes. In this case, the dispatch of VFRs is at the discretion of a human dispatcher, and the VFR system serves as a decision support system that provides the dispatcher with the necessary information, such as location and ETA, of volunteers that can be compared with the location and ETA of an ambulance. Once alerts are sent, the system constantly updates its recommendations based on the feedback from the alerted volunteers. This approach is used by the Life Guardians project managed by Israeli National EMS [] and in several AED and CPR projects [].

An alternative approach is autonomous dispatch, where VFRs are selected and alerted by an autonomous system according to a predefined business logic. The system can dispatch additional volunteers if the alerted volunteers ignore the alert, refuse to respond, or linger on the way. This approach was used by the UnityPhilly project [] and the PulsePoint project [].

Both approaches can be either registered (usual or expected) location based or current (dynamic) location based; for example, UnityPhilly uses a current location–based autonomous dispatch approach.

Integration of Volunteers’ Feedback Into Dispatch Algorithms

Many smartphone apps for VFR networks allow alerted responders to accept or decline the alert. Such feedback lowers the dispatcher's uncertainty: if a volunteer declines an alert, the dispatch algorithm can reconsider the selection of responders and send additional alerts to substitute volunteers, that is, volunteers who were not initially selected by the algorithm (eg, because they had a longer ETA) but who can be dispatched to achieve the target number of responders if one or more of the initially selected volunteers decline or ignore the alert. If an alerted volunteer ignores the alert and does not provide any feedback, the system waits for a set period of time, then treats the nonresponse as a "no" and acts accordingly. Figure 1 depicts this process.
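The feedback loop described above can be sketched as follows. This is a simplified Python illustration: `send_alert` and `get_reply` are assumed placeholders for the app's messaging layer, and the timeout is a parameter rather than any particular system's setting.

```python
import time


def dispatch_with_feedback(candidates, target_responders, timeout_s, send_alert, get_reply):
    """Alert volunteers from an ETA-ordered list, treat silence after `timeout_s`
    seconds as a decline, and dispatch substitutes until the target number of
    responders is reached or no candidates remain."""
    queue = list(candidates)
    accepted = []
    pending = {}  # volunteer -> time the alert was sent
    while len(accepted) < target_responders and (queue or pending):
        # Keep enough alerts outstanding to reach the target number of responders.
        while queue and len(accepted) + len(pending) < target_responders:
            volunteer = queue.pop(0)
            send_alert(volunteer)
            pending[volunteer] = time.monotonic()
        for volunteer in list(pending):
            reply = get_reply(volunteer)  # "accept", "decline", or None (no feedback yet)
            if reply == "accept":
                accepted.append(volunteer)
                del pending[volunteer]
            elif reply == "decline" or time.monotonic() - pending[volunteer] > timeout_s:
                del pending[volunteer]  # declined or ignored: a substitute will be alerted
        time.sleep(1)  # poll for feedback once per second
    return accepted
```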

Figure 1. The dispatch process and feedback from alerted volunteers.

Profiling

Profiling is “the process of generating profiles from obtained data, associated to one or multiple subjects” []. Profiling of people is widely used in several areas, such as targeted advertising [], donation solicitation [], and volunteer recruitment []. Elsner et al [,] proposed to use the profiling of volunteers in dispatch algorithms to enhance the prediction of the volunteers’ position, trajectory, and constraints. In this study, we used classification techniques to generate different behavioral profiles of volunteers that serve as independent variables for predicting responses to alerts.

The Purpose of the Study

The challenge of improving volunteer dispatch speed and response rates is recognized in fields ranging from food rescue operations [] to OHCA response in which the optimization of the responder network is now taking center stage [,]. Studies such as the one by Gregers et al [] have attempted to determine the optimal number of responders to dispatch, yet such studies base response viability solely on current ETA with no consideration of responder history or other characteristics that could improve responsiveness. Currently used dispatch algorithms that select volunteers based on their ETA without considering the likelihood of response may be suboptimal owing to a large percentage of alerts wasted on VFRs with shorter ETA but a low likelihood of response. We build on prior work on VFR optimization by presenting a novel approach for predicting whether a VFR will respond to, or ignore, a given alert. As such, the enhanced algorithm reduces the time that the system unnecessarily spends waiting for a response from volunteers who are likely to ignore the alert. The amount of time wasted depends on the specific dispatch algorithm; for example, in UnityPhilly trials, the system waited 2 minutes before dispatching a substitute volunteer. A faster dispatch of substitute volunteers has the potential to reduce the response time of the VFR network as a whole and improve its effectiveness. However, overdispatch of more VFRs than necessary to secure an effective emergency response can have a negative impact on future willingness to respond [].


Methods

Data

We used data from the UnityPhilly study that piloted a smartphone-based app for requesting and providing ERC assistance to those suspected of experiencing an opioid overdose in the neighborhood of Kensington, PA, over 12 months from March 1, 2019, to February 28, 2020. Kensington has Philadelphia’s highest concentration of overdose deaths and is also home to Prevention Point Philadelphia, which is a city-sanctioned syringe exchange program that also distributes naloxone and provides naloxone training. Recruitment occurred via face-to-face screening at Prevention Point’s drop-in center, Prevention Point’s substance use disorder treatment van, street intercepts, and chain referrals from enrolled participants. The inclusion criteria for participants were that they lived, worked, or used drugs within 4 zip codes around the Kensington neighborhood (19122, 19125, 19133, and 19134); possessed a smartphone with a data plan; were willing to have location and movements tracked via an app; were willing to carry naloxone; and were aged ≥18 years. Sampling purposefully targeted a mix of members of the Kensington community who used opioids nonmedically in the past 30 days and those who reported no nonmedical opioid use in the past 30 days. The study recruited 112 volunteers who were almost equally divided between people who reported opioid use in the past 30 days at baseline (n=57, 50.9%) and community members, that is, people who reported no opioid use at baseline (n=55, 49.1%).

At a research storefront in Kensington, the study enrollment procedure included obtaining written informed consent, the recording of contact information, structured baseline interviews, app installation and training, and naloxone distribution and training. During the informed consent procedure, participants agreed to participate in a baseline interview, monthly follow-up interviews, and brief surveys after overdose incidents. Project staff installed the app on the participant’s smartphone and provided app training, which included watching an animated training video explaining app use and practicing using the app to send and receive alerts with project staff. Naloxone training included recognizing the signs of opioid overdose, practicing rescue breathing on a CPR dummy, and demonstrating how to administer intranasal naloxone. All participants received a kit containing 2 doses of intranasal naloxone. The UnityPhilly app enabled them to report opioid overdose events and to receive notifications about opioid overdose events reported by other members in their proximity. Participants received US $25 in cash for the baseline interview and US $5 for each completed follow-up monthly interview or incident survey. No compensation was offered or given for the use of the app to signal or respond to overdose incidents. More details about the study are available in prior publications [].

The data used for this analysis consist of 4 components (Textbox 1 and Figure 2).

Of the 112 volunteers recruited to UnityPhilly, 27 (24.1%) were completely inactive as either signaler or responder (ie, they did not send or respond to a single alert). Of the remaining 85 volunteers, 80 (94%) received at least 1 alert and were defined as responders, and 52 (61%) signaled at least 1 event and were defined as signalers (many volunteers served in both roles). Figure 3 presents the distribution of responders and signalers.

Events that were canceled by the signaler for any reason were considered false alarms. We excluded these events from this analysis because we were not able to distinguish between alarms ignored by the responder and alarms that were canceled before the responder had a chance to respond. Figure 4 describes the sample.

We used alerts as the unit of analysis.

Textbox 1. The 4 components of the data used for analysis.

Event

This refers to an opioid overdose event. An event’s characteristics are true or false alarm, signaler, weekday or weekend, and day or night.

Signaler

This refers to a UnityPhilly user who witnesses an event and reports it to the system using the UnityPhilly app. A signaler’s characteristics are age, gender, housing status, employment status, naloxone carriage adherence before joining the UnityPhilly community, opioid overdose witnessing experience before joining the UnityPhilly community, and experience in administering naloxone to a person experiencing an overdose before joining the UnityPhilly community.

Responder

This refers to a UnityPhilly member who is selected by the UnityPhilly system (based on their location and estimated time of arrival) and notified in their UnityPhilly app about an event. The responder’s characteristics are the same as those of the signaler.

Alert

This refers to a notification sent to a specific responder about a specific event. The UnityPhilly app enables the responder to accept or decline an alert; however, many alerts are ignored, that is, neither accepted nor declined. An alert's characteristics are the distance between the potential responder and the event scene at the moment of the alert; the number of previous alerts received by the responder since joining UnityPhilly; the number of previous false alerts received by the responder since joining; the number of previous alerts received by the responder since joining that were initiated by the same signaler; the number of previous false alerts received by the responder since joining that were initiated by the same signaler; the number of previous responses by the responder since joining; the number of previous responses to false alerts received by the responder since joining; and the number of previous responses to false alerts initiated by the same signaler that were received by the responder since joining.

Figure 2. Entities in the UnityPhilly data set. M: many.

Figure 3. Distribution of responders and signalers in the UnityPhilly data set.

Figure 4. Sample used for this study. M: many.

Analytical Approach

We used multiple analytical methods to classify the behavior of each volunteer identified as being in the proximity of an overdose event. We integrated data on specific volunteers and events into the dispatch algorithm in such a way that, for each dispatched volunteer who is predicted to be likely to ignore the alert, an additional volunteer is dispatched right away (if available), until the maximum number of volunteers to be dispatched is reached or no more volunteers are available. Volunteers for whom the algorithm predicts a low probability of response are still dispatched and are thus given the chance to respond. Figure 5 depicts this process.
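A minimal sketch of how a predicted likelihood of response could be folded into this dispatch step is shown below. The `predict_will_respond` classifier interface, the target number of responders, and the dispatch cap are illustrative assumptions rather than the exact UnityPhilly logic.

```python
def dispatch_with_prediction(candidates_by_eta, predict_will_respond, target, max_dispatch):
    """Select volunteers in ETA order; for every selected volunteer predicted to
    ignore the alert, dispatch one additional volunteer right away, up to
    `max_dispatch` or until no more candidates are available."""
    needed = target
    dispatched = []
    for volunteer in candidates_by_eta:
        if len(dispatched) >= max_dispatch or needed <= 0:
            break
        dispatched.append(volunteer)  # predicted non-responders still receive the alert
        if predict_will_respond(volunteer):
            needed -= 1  # counts toward the target number of responders
        # else: `needed` is unchanged, so the next candidate is alerted immediately
    return dispatched
```

In this sketch, a volunteer predicted to ignore the alert does not count toward the target, so the next candidate is alerted without waiting, which mirrors the process described above.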

We tested 4 models, based on different configurations of variables, to predict whether a given responder is likely to respond to a given event (Textbox 2 and Table 1).

Figure 5. Integration of the probability to respond into the dispatch algorithm. ETA: estimated time of arrival.

Textbox 2. The 4 models tested in this study.

Model 1

This model is based solely on historic events and alerts data, incorporating no other data related to the potential responders.

Model 2

This model is based on the events and alerts data, but it also integrates data on the responders’ patterns of behavior through their previous experience in the volunteer first responder network, including previous alerts and false alerts, and previous responses, including responses to false alerts.

Model 3

This model is based on the events and alerts data, as well as responders' personal and demographic data, and ignores their previous experience in the network.

Model 4

This model is based on the events and alerts data, as well as responders' personal and demographic data, and dynamically calculates the frequent responder indicator that represents the responder's experience in the community before a specific alert. This indicator was calculated as follows (a code sketch of this rule appears after Table 1):

<6 previous alerts: no
6-10 alerts and response rate ≥50%: yes
11-20 alerts and response rate ≥40%: yes
21-30 alerts and response rate ≥30%: yes
≥31 alerts and response rate ≥25%: yes
Otherwise: no

Table 1. Data used in each model.

Events and alerts data (weekday or weekend, day or night, and distance [m]): models 1, 2, 3, and 4
Responder's previous experience in UnityPhilly (previous alerts, previous false alerts, previous alerts by the same signaler, previous false alerts by the same signaler, previous responses, previous responses to false alerts, and previous responses to false alerts by the same signaler): model 2
Responders' demographic data (age, gender, housing status, and employment status): models 3 and 4
Responders' condition-specific characteristics (naloxone carriage adherence, history of witnessing opioid overdoses before joining UnityPhilly, and history of administering naloxone before joining UnityPhilly): models 3 and 4
Frequent responder indicator (recalculated after each alarm): model 4
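The frequent responder rule defined for model 4 translates directly into code. The following Python sketch is illustrative; the function name and inputs are assumptions.

```python
def is_frequent_responder(previous_alerts: int, previous_responses: int) -> bool:
    """Frequent responder indicator (model 4), recalculated before each alert
    from the responder's history up to that point."""
    if previous_alerts < 6:
        return False
    rate = previous_responses / previous_alerts
    if previous_alerts <= 10:
        return rate >= 0.50
    if previous_alerts <= 20:
        return rate >= 0.40
    if previous_alerts <= 30:
        return rate >= 0.30
    return rate >= 0.25  # 31 or more previous alerts
```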


Classification

The classification analysis for all models was conducted using 4 classification algorithms suitable for binary classification: (1) the J48 decision tree algorithm, the implementation of the C4.5 algorithm in the Weka software (University of Waikato) used in this research; (2) random forest; (3) neural network (multilayer perceptron); and (4) logistic regression. The J48 algorithm creates univariate decision trees for classification and provides an effective alternative to other classification methods. The choice of the best classification model is based on a combination of different evaluation metrics; the main goal was to identify the model that correctly classifies each answer class.

We used 4 evaluation metrics: accuracy, F-score, precision, and recall. Accuracy is the overall percentage of correctly classified instances. The F-score is the harmonic mean of the recall and precision metrics and ranges between 0 (none of the instances were correctly classified) and 1 (all instances were correctly classified). Precision is the percentage of true positively classified instances out of all positively classified instances. Recall is the percentage of positively classified instances out of all positive instances. The trends found in this analysis are best explained by the differences in the recall metric among the different classification algorithms and among the different classes.

Because of the relatively small overall number of cases in the data set, we did not use a percentage split for the training and test sets for models 1 to 3; instead, we used 10-fold cross-validation. Model 4 includes the additional synthetic dichotomous variable called frequent responder, which reflects the previous behavior of the responder. The variable is dynamically updated; therefore, a responder can change their behavior several times throughout the research period, from being active to inactive or vice versa. Because the frequent responder variable cannot be treated as a sequence of independent values and reflects behavioral patterns whose order must be preserved, cross-validation could not be used for the classification analysis of model 4. For this reason, we split the data set into a training set with 66.9% (664/993) of the data and a test set with 33.1% (329/993) of the data.
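The evaluation setup can be sketched as follows. The study itself used Weka, so this scikit-learn code is only an illustrative stand-in: the decision tree classifier approximates J48, and the feature matrix and labels shown here are synthetic placeholders, not the study data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier  # stand-in for Weka's J48 (C4.5)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic placeholders for the alert-level feature matrix and binary labels
# (responded vs ignored); in practice these come from the prepared data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(993, 10))
y = rng.integers(0, 2, size=993)

clf = DecisionTreeClassifier()

# Models 1-3: 10-fold cross-validation over all alerts.
cv_accuracy = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()

# Model 4: chronological split (about 67% train / 33% test) so that the dynamically
# updated frequent responder indicator never uses information from later alerts.
split = 664
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print(cv_accuracy,
      accuracy_score(y_test, pred),
      precision_score(y_test, pred),
      recall_score(y_test, pred),
      f1_score(y_test, pred))
```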

All 4 algorithms were used for a binary classification task in a baseline analysis that included only the events and alerts data (model 1 in Table 1). The results obtained provide the baseline for comparison with the models that add data related to the responders' previous experience and characteristics (models 2, 3, and 4 in Table 1). We claim that building a model that considers the responders' behavioral characteristics can improve the dispatch algorithm. In this kind of analysis, precision in predicting nonresponse is more important than precision in predicting response because, in the former case, a mistake will delay the dispatch of a substitute responder, whereas, in the latter case, a mistake will result in the dispatch of too many volunteers.

The comparison between all classification techniques and all evaluation metrics for the 4 models is presented in .

Ethical Considerations

All study procedures were approved by the Drexel University Institutional Review Board and registered with ClinicalTrials.gov (NCT03305497). Study enrollment included written informed consent. All data used for this research were deidentified. Participants received US $25 in cash for the baseline interview and US $5 for each completed follow-up monthly interview or incident survey. No compensation was offered or given for use of the app to signal or respond to overdose incidents.


Results

The results of this study are derived from an analysis of emergency events, volunteer participants’ demographics, and behavior patterns.

Description of the Sample

Table 2 presents the characteristics of overdose events.

Table 3 presents the distribution and correlation of the responders' characteristics. Cramér V was used for categorical variables, and Spearman ρ was used for ordinal variables. ANOVA tests for age differences among the different subgroups of categorical or ordinal variables did not reveal any significant differences at the 5% significance level.

Table 2. Description of overdose event characteristics (n=188).

Weekdays and weekends, n (%)a: Weekday 136 (72.3); Weekend 52 (27.7)

Days and nights, n (%)a: Day 133 (70.7); Night 55 (29.3)

Distance (meters; n=162b), mean (SD); median (IQR)c: 3326 (2784); 2595 (955.09-5567.75)

aCramér V correlation between weekday/weekend and day/night is 0.006.

bFor 26 (13.8%) of the 188 overdose events, distance data were not available.

cDistance during weekdays: mean 3611 (SD 2871) meters; distance during weekends: mean 2537 (SD 2384) meters; P=.03. Distance during the day: mean 3507 (SD 2724) meters; distance during the night: mean 2870 (SD 2910) meters; P=.19.

Table 3. Distribution and correlation of responders' characteristics (n=80). For each variable, the distribution is given first, followed by its correlations (r, with P value in parentheses) with the other variables.

Agea: correlations with gender 0.07 (P=.54); naloxone carriage adherence 0.18 (P=.12); homelessness 0.13 (P=.26); employment 0.14 (P=.22); history of witnessing an opioid overdose −0.08 (P=.52); history of administering naloxone −0.07 (P=.54).

Gender: Male 35 (44%); Female 44 (55%); Intersex 1 (1%). Correlations with naloxone carriage adherence 0.25 (P=.27); homelessness 0.42 (P<.001); employment 0.18 (P=.25); history of witnessing an opioid overdose 0.19 (P=.35); history of administering naloxone 0.14 (P=.58); age 0.07 (P=.54).

Naloxone carriage adherence: All the time 36 (45%); Often 22 (28%); Sometimes 10 (13%); Seldom 2 (3%); Never 10 (13%). Correlations with gender 0.25 (P=.27); homelessness 0.37 (P=.02); employment 0.27 (P=.16); history of witnessing an opioid overdose −0.18 (P=.05); history of administering naloxone −0.21 (P=.05); age 0.18 (P=.12).

Homelessness: Homeless 22 (28%); Not homeless 58 (73%). Correlations with gender 0.42 (P<.001); naloxone carriage adherence 0.37 (P=.02); employment 0.37 (P=.004); history of witnessing an opioid overdose 0.22 (P=.16); history of administering naloxone 0.09 (P=.46); age 0.13 (P=.26).

Employment: Part time 11 (14%); Full time 18 (23%); Unemployed 51 (64%). Correlations with gender 0.18 (P=.25); naloxone carriage adherence 0.27 (P=.16); homelessness 0.37 (P=.004); history of witnessing an opioid overdose 0.15 (P=.16); history of administering naloxone 0.14 (P=.58); age 0.14 (P=.22).

History of witnessing an opioid overdose (number of times): ≤20 48 (60%); 21-40 20 (25%); >40 12 (15%). Correlations with gender 0.19 (P=.35); naloxone carriage adherence −0.18 (P=.05); homelessness 0.22 (P=.16); employment 0.15 (P=.16); history of administering naloxone 0.63 (P<.001); age −0.08 (P=.52).

History of administering naloxone (number of times): ≤20 61 (81%); 21-40 7 (9%); >40 7 (9%). Correlations with gender 0.14 (P=.58); naloxone carriage adherence −0.21 (P=.05); homelessness 0.09 (P=.46); employment 0.14 (P=.58); history of witnessing an opioid overdose 0.63 (P<.001); age −0.07 (P=.54).

aAge (years): mean 40.31 (SD 10.41); median 39.5 (IQR 32-47.75).

Significant correlations were found between gender and homelessness (P<.001) as well as between history of witnessing an opioid overdose and history of administering naloxone (P<.001).

Response Patterns

Textbox 3 and Figure 6 present how the alerted volunteers responded (true alarms only; n=993). Responders could change their decision.

Textbox 3. Volunteers’ response patterns.

No answer

Responder ignored the alert. This was the final status in 60.3% (599/993) of the alerts.

No go

Responder notified the system that they are not able to respond. This was the final status in 23% (228/993) of the alerts.

En route

Responder notified the system that they are on the way to the scene. This was the final status in 5.1% (51/993) of the alerts.

On scene

Responder notified the system that they are on the scene. This status can be set automatically by the system (based on the responder’s location) or manually by the responder. This was the final status in 2.6% (26/993) of the alerts.

Done

Responder performed the treatment. This was the final status in 7.9% (79/993) of the alerts.

Canceled dispatch

This was the final status in 1% (10/993) of the alerts.

Figure 6. Response patterns (refer to Textbox 3 for an explanation of the terms used in this figure).

Classification Analysis of Response Patterns

Figure 7 presents the ability of each model to predict the responder's behavior. To compare model 4 with the other models, all models were tested using the test set of alerts (n=329).

For the test set, model 4 provided the best classification accuracy, both overall and for ignored alerts. Model 3 provided the same classification accuracy for ignored alerts, slightly lower accuracy overall, and lower accuracy for answered alerts. Model 2 also provided the best classification accuracy for ignored alerts; however, its overall accuracy was lower, and its accuracy for answered alerts was significantly lower. Model 1's classification accuracy was the lowest.

Figure 8 presents the ability of models 1 to 3 to classify the responder's behavior using the full data set (n=993).

For the full set, model 3 provided the best classification accuracy. Model 2 had similar accuracy for ignored events and lower accuracy both overall and for answered events. Model 1’s classification accuracy was the lowest. Model 4 was not tested with the full set because the construction of the frequent responder variable requires training.

Figure 9 presents the J48 decision tree for model 4 for the test set.

Figure 7. Classification accuracy of models 1 to 4 using the test set (n=329).

Figure 8. Classification accuracy of models 1 to 3 using the full set (n=993).

Figure 9. J48 decision tree for model 4 for the test set.

The analysis of the classification tree reveals 5 possible routes to the response result: infrequent responders aged >54 years, frequent responders who administered naloxone <20 times, male frequent responders who administered naloxone 21 to 40 times, fully employed female frequent responders who administered naloxone 21 to 40 times, and unemployed female frequent responders who administered naloxone 21 to 40 times in situations where the distance to the scene was <272 meters.

It should be noted that the overall accuracy is not very high and that the classification output contains false-positive and false-negative errors. A false-positive error occurs when an ignored alert is classified as a responded alert, and a false-negative error occurs when a responded alert is classified as an ignored alert.

Potential Time Savings

Substitute responders (responders who were not initially selected by the algorithm) were used in 73.4% (138/188) of the events and received 33.6% (334/993) of the alerts. Figure 10 presents the lengths of the delays (in minutes) before substitute responders were dispatched.

Figure 10. Time before substitute dispatch (n=334).

Factors Affecting Willingness to Respond to an Opioid Overdose Event

Table 4 presents the analysis of differences between alerts that were ignored and alerts that resulted in some response (en route, no go, or on scene).

Significant differences between responded alerts and ignored alerts were found for the following variables: gender (higher response rate by male volunteers; P=.05), naloxone carriage adherence (P<.001), employment (higher response rate by volunteers who were unemployed; P<.001), age (slightly higher average age among volunteers who responded; P=.003), the number of previous alerts (higher among volunteers who responded; P=.003), previous false alerts (higher among volunteers who responded; P=.003), previous false alerts by the same signaler (lower among volunteers who responded; P=.02), previous responses (higher among volunteers who responded; P<.001), and previous responses to false alerts (higher among volunteers who responded; P<.001).

Table 4. Differences between responded alerts and ignored alerts (n=993). Values are given for responded alerts (n=394) and ignored alerts (n=599), followed by the P value.

Weekdays and weekends, n (%): Weekday 289 (73.4) vs 451 (75.3); Weekend 105 (26.6) vs 148 (24.7); P=.49a

Days and nights, n (%): Day 282 (71.6) vs 415 (69.3); Night 112 (28.4) vs 184 (30.7); P=.44a

Sex, n (%): Male 182 (46.2) vs 239 (39.9); Female 212 (53.8) vs 360 (60.1); P=.05a

Naloxone carriage adherence, n (%): All the time 115 (29.2) vs 241 (40.2); Most of the time 174 (44.2) vs 150 (25); Sometimes 29 (7.4) vs 59 (9.8); Seldom 4 (1) vs 17 (2.8); Never 72 (18.3) vs 132 (22); P<.001a,b

Homelessness, n (%): Yes 68 (17.3) vs 124 (20.7); No 326 (82.7) vs 475 (79.3); P=.18a

Employment, n (%): Part time 18 (4.6) vs 70 (11.7); Full time 70 (17.8) vs 110 (18.4); Unemployed 306 (77.7) vs 419 (69.9); P<.001a

Age (y), mean (SD): 42.91 (11.86) vs 40.47 (13.11); P=.003c

Previous alerts, mean (SD): 25.00 (20.96) vs 21.09 (20.31); P=.003c

Previous alerts by the same signaler, mean (SD): 3.35 (5.12) vs 3.52 (5.58); P=.63c

Previous false alerts, mean (SD): 7.70 (7.00) vs 6.39 (6.46); P=.003c

Previous false alerts by the same signaler, mean (SD): 0.70 (1.35) vs 0.96 (2.03); P=.018c

Previous responses, mean (SD): 14.27 (14.01) vs 6.74 (10.40); P<.001c

Previous responses to false alerts, mean (SD): 1.39 (1.85) vs 0.82 (1.59); P<.001c

Previous responses to false alerts by the same signaler, mean (SD): 0.13 (0.42) vs 0.13 (0.50); P=.89c

Distance, mean (SD): 1947.24 (2290.24) vs 1726.98 (2127.02); P=.16c

History of witnessed overdoses (number of times), n (%): ≤20 247 (62.7) vs 382 (63.8); 21-40 80 (20.3) vs 106 (17.7); >40 67 (17) vs 111 (18.5); P=.54a

History of naloxone administration (number of times), n (%): ≤20 299 (86.4) vs 453 (80.6); 21-40 14 (4) vs 38 (6.8); >40 33 (9.5) vs 71 (12.6); P=.07a

aP value for the chi-square test of independence.

bP value for the Kendall τ test for ordinal variables.

cP value for the 2-tailed independent samples t test.

