GENERAL INFORMATION
- Developer Name : MedAZ.Net, LLC
- Product Name : Med A-Z
- Version : 202001
- Certification number : 15.05.05.3150.MDAZ.01.00.1.230616
- Plan ID : 20231110maz
- Developer Real World Testing Plan Page URL: https://www.medaz.net/RWT.html
- Certified Criteria:
170.315 (b)(3) Electronic prescribing
170.315 (b)(10) Electronic health information export
170.315 (c)(1) Clinical Quality Measures – Record and Export
170.315 (c)(2) Clinical Quality Measures – Import and Calculate
170.315 (c)(3) Clinical Quality Measures – Report
170.315 (f)(1) Transmission to immunization registries
170.315 (f)(2) Transmission to PHA – syndromic surveillance
170.315 (g)(7) Application access – patient selection
170.315 (g)(10) Standardized API for patient and population services
Justification for Real World Testing Approach
This plan is designed to capture a holistic picture of how the system is used over approximately one calendar year. Because this version of Med A-Z is the first iteration of our cloud-native EHR system, we chose a simple, long-term testing framework that lets us gather data and better frame subsequent tests. Over the course of the year, we will count gross successes and failures, as defined in the Measures section below, to establish a baseline reliability score (successes / (successes + failures)) for each measure. The reliability score ranges from 0 to 1. At each quarterly review, the running reliability scores will be tabulated, and systems with lower scores will be reviewed and revised. This framework allows us to view data trends globally and target potential weaknesses systematically.
Standards Updates (SVAP and USCDI)
Standard (and Version) | N/A |
Updated certification criteria and associated product number | N/A |
CHPL Product Number | 15.05.05.3150.MDAZ.01.00.1.230616 |
Method used for standard update | N/A |
Date of ONC ACB Notification | N/A |
Date of customer notification (SVAP Only) | N/A |
Conformance Measure | N/A |
USCDI updated certification criteria (and USCDI version) | N/A |
Measures Used in Overall Approach
Care Setting:
Med A-Z EHR is designed to be used in a variety of ambulatory care settings. Data from all care settings where the EHR is in use will be recorded. Care settings will be the same across all testing.
Measurement Methodology:
The primary mode of measurement for measures (b)(3), (b)(10), (f)(1), (f)(2), (g)(7), and (g)(10) is server-side logs kept in the Med A-Z system. All logs from the data collection period will be examined, and the number of successes and failures, as defined below for each measure, will be counted. For measures (c)(1), (c)(2), and (c)(3), which involve file creation, success and failure will be determined manually based on the definitions below.
Metric Name:
Reliability Score = Gross Successes / (Gross Successes + Gross Failures)
For each measure, a reliability score will be calculated from the gross successes and gross failures as defined below.
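As a non-authoritative illustration of the tallying and scoring described above, the sketch below counts success and failure messages in a hypothetical plain-text log and computes the reliability score. The log format, timestamps, and message phrases are assumptions for illustration only, not the actual Med A-Z log format:

```python
def reliability_score(gross_successes: int, gross_failures: int) -> float:
    """Reliability Score = Gross Successes / (Gross Successes + Gross Failures)."""
    total = gross_successes + gross_failures
    if total == 0:
        raise ValueError("no events recorded for this measure")
    return gross_successes / total

def tally_log(lines, success_phrase, failure_phrase):
    """Count success/failure messages in plain-text log lines (assumed format)."""
    successes = sum(1 for line in lines if success_phrase in line)
    failures = sum(1 for line in lines if failure_phrase in line)
    return successes, failures

# Hypothetical log excerpt for the (f)(1) immunization-registry measure.
log = [
    "2024-01-05 10:02:11 f1 transmission successful",
    "2024-01-05 10:07:43 f1 transmission successful",
    "2024-01-05 11:19:02 f1 transmission failure",
]

ok, fail = tally_log(log, "transmission successful", "transmission failure")
print(f"f1 reliability: {reliability_score(ok, fail):.2f}")
```

A score of 1.0 would mean every logged event succeeded; the quarterly reviews compare these running scores across measures to flag subsystems needing revision.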
Measures:
Associated Criteria: 170.315 (b)(3) Electronic Prescribing
● Measurements:
- Gross Successes – The count of Surescripts confirmation or waiting responses in server logs.
- Gross Failures – The count of error messages in the server logs.
● Justification: This measure lets us establish a gross success rate and spot any trends during the scheduled review sessions.
● Relied upon Software: Surescripts
● Expected Outcomes: We expect a reliability score of 0.9 or higher, meaning the measure operates reliably at least 90% of the time.
Associated Criteria: 170.315 (b)(10) Electronic Health Information Export
● Measurements:
- Gross Successes – The count of “Export files created” success messages in server logs.
- Gross Failures – The count of “file creation failure” error messages in the server logs.
● Justification: This measure lets us establish a gross success rate and spot any trends during the scheduled review sessions.
● Relied upon Software: N/A
● Expected Outcomes: We expect a reliability score of 0.9 or higher, meaning the measure operates reliably at least 90% of the time.
Associated Criteria: 170.315 (c)(1) Clinical Quality Measures – Record and Export
● Measurements:
- Gross Successes – The manual count of successful QRDA1 file creations.
- Gross Failures – The manual count of QRDA1 file rejections.
● Justification: This measure lets us establish a gross success rate.
● Relied upon Software: N/A
● Expected Outcomes: We expect a reliability score of 0.9 or higher, meaning the measure operates reliably at least 90% of the time.
Associated Criteria: 170.315 (c)(2) Clinical Quality Measures – Import and Calculate
● Measurements:
- Gross Successes – The manual count of times data is read and imported correctly into the Med A-Z system.
- Gross Failures – The manual count of “file read failure” error messages in the server logs.
● Justification: This measure lets us establish a gross success rate.
● Relied upon Software: N/A
● Expected Outcomes: We expect a reliability score of 0.9 or higher, meaning the measure operates reliably at least 90% of the time.
Associated Criteria: 170.315 (c)(3) Clinical Quality Measures – Report
● Measurements:
- Gross Successes – The manual count of successful file submissions to CMS.
- Gross Failures – The manual count of QRDA3 file rejections.
● Justification: This measure lets us establish a gross success rate.
● Relied upon Software: N/A
● Expected Outcomes: We expect a reliability score of 0.9 or higher, meaning the measure operates reliably at least 90% of the time.
Associated Criteria: 170.315 (f)(1) Transmission to Immunization Registries
● Measurements:
- Gross Successes – The count of “transmission successful” success messages in server logs.
- Gross Failures – The count of “transmission failure” error messages in the server logs.
● Justification: This measure lets us establish a gross success rate and spot any trends during the scheduled review sessions.
● Relied upon Software: N/A
● Expected Outcomes: We expect a reliability score of 0.9 or higher, meaning the measure operates reliably at least 90% of the time.
Associated Criteria: 170.315 (f)(2) Transmission to Public Health Agencies – Syndromic Surveillance
● Measurements:
- Gross Successes – The manual count of confirmation messages from the CDC.
- Gross Failures – The manual count of non-responses from the CDC.
● Justification: This measure lets us establish a gross success rate.
● Relied upon Software: N/A
● Expected Outcomes: We expect a reliability score of 0.9 or higher, meaning the measure operates reliably at least 90% of the time.
Associated Criteria: 170.315 (g)(7) Application Access – Patient Selection
● Measurements:
- Gross Successes – The count of “Connection Successful” success messages in server logs.
- Gross Failures – The count of “Connection Failure” error messages in the server logs.
● Justification: This measure lets us establish a gross success rate and spot any trends during the scheduled review sessions.
● Relied upon Software: N/A
● Expected Outcomes: We expect a reliability score of 0.9 or higher, meaning the measure operates reliably at least 90% of the time.
Associated Criteria: 170.315 (g)(10) Standardized API for Patient and Population Services
● Measurements:
- Gross Successes – The count of “Connection Successful” success messages in server logs.
- Gross Failures – The count of “Connection Failure” error messages in the server logs.
● Justification: This measure lets us establish a gross success rate and spot any trends during the scheduled review sessions.
● Relied upon Software: N/A
● Expected Outcomes: We expect a reliability score of 0.9 or higher, meaning the measure operates reliably at least 90% of the time.
Timeline and Milestones for Real World Testing CY 2024
Milestone | Target Date |
Begin data collection as laid out in RWT Plan | January 1, 2024 |
Quarterly review 1 | April 1, 2024 |
Quarterly review 2 | July 1, 2024 |
Quarterly review 3 | October 7, 2024 |
End data capture for all measures | December 20, 2024 |
Review data collected for all measures and finalize results for report | December 28, 2024 |
Prepare results report | January 2025 |
Submit Real World Testing results to ACB | January 2025 |
Developer Attestation
Attestation
This Real World Testing plan is complete with all required elements, including measures that address all certification criteria and care settings. All information in this plan is up to date and fully addresses the health IT developer’s Real World Testing requirements.
Authorized Representative Name: | Vasu Iyengar |
Authorized Representative Email: | medazsupport@medaz.net |
Authorized Representative Phone: | +1 (609) 716-6991 |
Authorized Representative Signature: | Vasu Iyengar |
Date: | 11/10/2023 |