Coding, Classification & Reimbursement

Journal of AHIMA CAC Reality Check Productivity

  • 1.  Journal of AHIMA CAC Reality Check Productivity

    Posted 17 days ago
    Hi Everyone,

    I have a question regarding the AHIMA article in the June 2019 issue, "CAC Reality Check."

    On page 13 it states: "Dolbey client with nine hospitals went from coding 20 inpatient charts per HOUR to 30 charts per HOUR using CAC. Similar improvement in the ER Dept., with improvement of 100 charts per HOUR to 175 per HOUR with CAC."

    I can't see how this is possible.

    I can see these figures in an eight-hour day, depending on the facility, but not hourly.
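    A quick back-of-the-envelope check makes the point. This is only an illustrative sketch: the chart counts are the figures quoted from the article, and the eight-hour shift is an assumption.

```python
# Back-of-the-envelope check of the quoted rates, assuming an
# eight-hour work day. Chart counts come from the article quote;
# the shift length is an assumption for illustration.
HOURS_PER_SHIFT = 8

claimed_per_hour = {
    "inpatient, before CAC": 20,
    "inpatient, with CAC": 30,
    "ED, before CAC": 100,
    "ED, with CAC": 175,
}

for setting, rate in claimed_per_hour.items():
    daily = rate * HOURS_PER_SHIFT          # what the per-hour reading implies per day
    minutes_per_chart = 60 / rate           # time budget per chart at that rate
    print(f"{setting}: {rate}/hour -> {daily} charts/day, "
          f"{minutes_per_chart:.1f} minutes per chart")
```

    Read per hour, 175 ED charts allows only about 20 seconds per chart; read as a daily figure, 175 charts over an eight-hour day works out to roughly 22 per hour, which is far more plausible.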

    Regards,

    ------------------------------
    Diane Morel
    Clinical Coding Specialist II
    Rhode Island Hospital
    ------------------------------


  • 2.  RE: Journal of AHIMA CAC Reality Check Productivity

    Posted 16 days ago

    That has to be a misprint – that's darn near impossible!

    Respectfully,

    Lynn

    Lynn A. Wall, MBA, RHIT, CCS
    Interim Coding Supervisor
    EM: lynn.wall@mihs.org
    PH: 610-442-3760
    M-F 8:00 a.m. – 4:00 p.m. (EST)


  • 3.  RE: Journal of AHIMA CAC Reality Check Productivity

    Posted 16 days ago
    I can only assume that this is TOTAL OUTPUT/PRODUCTION for the one client's nine hospitals.

    My two cents: documentation must be top notch, documents must be mapped appropriately, and both components must be extremely accurate for a good CAC process. (You cannot have "problems with the Problem Lists"!)
    You have to balance Quality & Productivity!





    ------------------------------
    Patti Markunas
    Manager, Coding
    Meritus Medical Center
    ------------------------------



  • 4.  RE: Journal of AHIMA CAC Reality Check Productivity

    Posted 15 days ago
    Hi,
    As the author of the article, I greatly appreciate it when our readers notice details like you have here and keep us on our toes! I contacted my source at Dolbey to seek clarification about their client's improvement rate using CAC. They would like to clarify that the number of charts coded is based on an eight-hour work day. I hope this clarification helps!

    ------------------------------
    Mary Butler
    Associate Editor
    Journal of AHIMA
    ------------------------------



  • 5.  RE: Journal of AHIMA CAC Reality Check Productivity

    Posted 15 days ago
    Hi Everyone,

    Thank you for catching the misprint in our statistics. Although artificial intelligence is extremely beneficial for the output of CAC, you are all right: it certainly cannot code that many charts per hour (maybe one day!) 😊 Our nine-hospital customer realized those significant productivity increases in their inpatient, outpatient, ED, and ancillary departments per day, not per hour.

    Feel free to reach out if you have any other questions. Thank you!

    ------------------------------
    Kristi Fahy, RHIA
    Account Executive
    DVS, Dolbey Territory Partner

    kfahy@digital-voice.com
    ------------------------------



  • 6.  RE: Journal of AHIMA CAC Reality Check Productivity

    Posted 15 days ago
    Thank you, Kristi, for the clarification.

    ------------------------------
    Diane Morel
    Clinical Coding Specialist II
    Rhode Island Hospital
    ------------------------------



  • 7.  RE: Journal of AHIMA CAC Reality Check Productivity

    Posted 15 days ago
    That still sounds inaccurate. What secondary diagnoses are being picked up?

    Anthony Neal




  • 8.  RE: Journal of AHIMA CAC Reality Check Productivity

    Posted 15 days ago
    Hi,
    By the eight-hour work day clarification, are you saying that their productivity is 30 accounts per DAY?
    While this is more in the ballpark, it is still unlikely unless they are not doing any abstract work or checking for health grade codes, ADT accuracy, or attending provider issues.
    The issue I have with articles such as these is that coding managers and HIM managers begin to think this level of productivity is achievable at their facility without knowing all the facts. Productivity is also boosted at facilities whose managers implement an "off the record" policy: if a change to correct coding doesn't matter to the final DRG (APR/SOI/ROM or MS-DRG), then don't make the change, don't note the change, and don't give it back to the coder to be corrected. So much for the accuracy of all the codes submitted, and so much for coder improvement through audit. This makes for coders who trust whatever CAC spits out without a thorough review of the documentation.
    Quality takes time. It always has and it always will.

    Laura C. Jones, RHIT, CCS
    ICD-10-CM/PCS Coding Auditor 





  • 9.  RE: Journal of AHIMA CAC Reality Check Productivity

    Posted 10 days ago

    Thank you for your comments and input on this topic. We have seen these kinds of productivity gains with our customers; however, you are correct that these gains differ for each organization based on a number of criteria. Abstracting, querying, and other common coder tasks can all be performed within our CAC, eliminating the need to toggle between multiple systems. Because CDI can also be integrated within our CAC, coders have the benefit of knowing what the CDI specialist did while the patient was in house. These features, in addition to many other tools, have allowed coders to significantly increase their productivity and achieve accurate coding outcomes.


    As for the accuracy of code suggestions, traditional CAC technologies use natural language processing (NLP) to scan the document and provide code suggestions. However, NLP has its limitations: it is rules-based and dependent on the context of the documentation. If the rules aren't written to accommodate the way documentation is written, inaccurate code suggestions may be provided. We, on the other hand, take NLP a step further and utilize artificial intelligence and machine learning to enable far more accurate code suggestions. Essentially, we use the AI to analyze documentation and apply coding rules and guidelines from there. The AI is able to understand when it is appropriate to combine codes, how to handle negations, etc. Then the machine learning learns from every single experience performed within CAC and overlays those experiences to consistently suggest accurate codes. The more data and information fed through CAC, the more it will learn, and the more accurate its suggestions will be for coders to review and validate.


    I hope this information is helpful. If you would like to learn more about how this works, feel free to reach out. I would also be more than happy to put you in touch with one of our current customers to confirm these statistics.

    Thank you!



    ------------------------------
    Kristi Fahy, RHIA
    Account Executive
    DVS, Dolbey Territory Partner

    kfahy@digital-voice.com
    ------------------------------