Latest News

Situational Awareness: There is More to Fire Scene Safety Beyond NFPA 921

Source: David Harlow, Principal Consultant, Fire and Explosion, Envista Forensics

Whether you are an insurance adjuster or a fire investigator, when you are out on a loss site, such as a fire scene, it is important to always be aware of your surroundings and maintain situational awareness: the conscious knowledge of the immediate environment and the events occurring in it. Situational awareness involves perceiving the elements in the environment, comprehending what they mean and how they relate to one another, and anticipating how you may need to act or react. More than that, situational awareness is about always keeping yourself and your peers safe through mental preparation.

Situational awareness can be understood as a collection of skills needed to set limits in circumstances that may make us uncomfortable or are possibly even dangerous. It is an awareness of the environment and a basic understanding of how to avoid potentially dangerous situations. When we have a good understanding of situational awareness and we know our mental abilities to survey and understand our surroundings, we gain self-esteem and confidence to trust our instincts.

Staying Safe On-site, After A Fire

In both the public and private sectors, new generations of insurance claims adjusters and investigators are entering the workforce, and as they enter, they are faced with new risks: risks posed by today’s technology, by public safety concerns, by issues such as climate change, and by the ongoing impacts of a global pandemic. Amid all these risks, safety during a loss investigation remains a paramount concern.

The second edition of the International Association of Arson Investigators (IAAI) Health and Safety Committee’s manual, Fire Investigator Health and Safety Best Practices, published in May 2020, offers no recommendations or guidelines on how fire investigators should maintain situational awareness and control of the area where they are working during uncertain circumstances. Additionally, both past and current editions of NFPA 921 include a chapter dedicated to scene safety, but they offer very little information addressing potential health and safety events that may occur after the loss, when the site is being investigated.

Whether you have been a part of fire scene investigations for 30 years or three weeks, the tips below are essential to comprehend and utilize for maintaining situational awareness and keeping yourself and others safe on loss sites.

When possible, attend site inspections with others. This may mean other adjusters, investigators, or law enforcement professionals. Keep in mind that anytime you go into a new area, you may be seen as suspicious or a threat, so traveling with others is a good rule of thumb.

  • Clearly identify yourself by wearing company-labeled Personal Protective Equipment (PPE) and providing clearly identifiable signage in your vehicle so that people know why you're there.
  • Take the time to introduce yourself to neighboring businesses and/or individuals, letting them know that you are conducting an investigation in the area. This is also a good time to ask if they have any information, photos, or videos on the fire.
  • Be aware of your surroundings and immediate environment.
  • Just as you should when entering an unfamiliar building or structure, determine the entry and exit points.

Remember the job you’ve been hired to do. These can be stressful situations, but your ability to remain focused on the job at hand will enable you to conduct your investigation safely and effectively.

 

Up in Smoke: The Dangers of Cannabis and Hash Oil Operation Fires

Source: Andrew Bennett, Assistant Technical Director, Fire & Explosion, and Paul Mullin, Principal Engineer

Cannabis cultivation operations have been around for years, but with recent legalization in various U.S. states, the number of regulations and standards on legal operations has increased. Following this legalization, the National Fire Protection Association (NFPA) created NFPA 420: Standard on Fire Protection of Cannabis Growing and Processing Facilities. This, along with other standards, has helped guide safety for facilities that produce, process, and extract cannabis, but the number of illegal operations that do not adhere to these standards is rising. The ever-growing variety of THC extraction methods has become an additional hazard to contend with, and those methods continue to evolve.

There are numerous hazards investigators and engineers face when dealing with a cannabis cultivation operation, which is why proper personal protective equipment (PPE) is necessary for those on-site. Each scene is a little different, but typically a filtered mask, hard hat, nitrile gloves, eye protection, and a coverall suit with shoe coverings are required. Mold is an ever-present hazard in these environments due to moisture and the typically ineffective humidity control in these operations. Electricity and chemicals are often-forgotten deadly hazards, as some operations use chemicals in their processing, either as part of the process itself or to alter concentrations or effects. When on-site, one should assume that any liquid or powder is an unknown chemical, even if it is labeled, to ensure safety.

Due to the necessity for lighting, fans, dehumidifiers, and other electrical devices, indoor operations normally consume a lot of electricity. To get around the high utility bills and possible red flags to local agencies, many of these operations will bypass the electrical service meter or use power from adjacent structures. Even if the service meter has been removed, it is likely that there is still power connected to the structure, and it is important to take precautions until it is proven otherwise. Many times, distribution devices, such as power strips and extension cords, are of poor quality and have a higher possibility of being overloaded, which poses a serious fire and safety risk.

In this article, we will discuss some of the typical problems found in relation to the causes of fires in cannabis operations.

Electrical Circuits

Although cannabis cultivation operations are becoming more prevalent with added legislation, licensing and code requirements are still evolving. We are still faced with the numerous hazards of unlicensed or illegal operations, with the electrical aspects being a primary problem. The electrical work performed in illegal operations is usually not up to code or what would be considered safe practice. Even in licensed operations, electrical work may be performed by unlicensed individuals when the operation is trying to save on costs.

As mentioned above, when entering one of these operations, one should always assume power is still connected until proven otherwise. Electrical service bypasses are typically located above the main service meter for the structure and, due to the clandestine nature of the operation, are often only visible from the interior of the structure.

Electrical service can also be taken from an adjacent structure, with the wiring buried or hidden by other means. In multi-tenant buildings, power is sometimes scavenged from adjacent panels or circuit breakers. Many of these connections or splices are made with whatever is available at the time: welded joints, nuts and bolts, or conductors simply twisted together with no covering. Poor and improper connections can cause overloading, resistive heating, and arcing, which can then ignite surrounding combustibles.

Due to the heavy electrical draw on the system, primarily from lighting and climate control, the existing circuits are often overloaded, causing wiring and connections to degrade and tripping the circuit breakers or fuses. A routinely tripping circuit breaker often leads the operator to simply replace it with one of a higher amperage rating, which does nothing but allow a larger overcurrent on a circuit that was never designed for it.
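As a rough, hypothetical illustration of how quickly lighting alone can overload a branch circuit, consider the sketch below (the lamp wattage, lamp count, and breaker size are assumed values chosen for the example, not figures from any particular loss):

  # Hypothetical illustration of a grow-room lighting overload.
  # All figures (lamp wattage, lamp count, breaker size) are assumptions
  # for the example, not data from an actual investigation.

  VOLTAGE = 120                              # nominal branch-circuit voltage, volts
  BREAKER_RATING = 20                        # common residential breaker, amps
  CONTINUOUS_LIMIT = 0.8 * BREAKER_RATING    # continuous loads are typically held to 80%

  lamp_watts = 1000                          # one HID grow lamp plus ballast losses (assumed)
  lamp_count = 4                             # lamps sharing the same circuit (assumed)

  total_watts = lamp_watts * lamp_count
  current_draw = total_watts / VOLTAGE       # amps drawn by the lighting alone

  print(f"Lighting load: {current_draw:.1f} A on a {BREAKER_RATING} A breaker "
        f"(continuous limit about {CONTINUOUS_LIMIT:.0f} A)")
  # Output: Lighting load: 33.3 A on a 20 A breaker (continuous limit about 16 A)

Swapping the tripping 20 A breaker for a 30 A or 40 A one does not reduce that roughly 33 A draw; it only allows wiring sized for the smaller breaker to carry the overload until its insulation degrades.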

Powered Equipment

Legitimate Underwriters Laboratories (UL) listed cannabis cultivation equipment is readily available from quality manufacturers. However, because these products are expensive, cheaper manufacturers and products are often used instead. As the saying goes, “you get what you pay for,” and this is normally the case for cannabis cultivation equipment. The lack of quality control, standards, and testing can lead to a vast array of malfunctions. Poor contact on connectors is often seen, which leads to resistive heating and arcing. Corrosion caused by the humid environment or exposure to chemicals can produce the same ignition mechanism. With this high probability of poor connections and errant electrical activity, resistive heating, arcing, and melting can occur.

Lighting

When it comes to indoor operations, lighting is essential. More light means bigger yields for growers. There are three major types of lighting: fluorescent, high-intensity discharge (HID), and light-emitting diode (LED).

Fluorescent

The two types of fluorescent lights used in cannabis growing are compact fluorescent lights (CFL) and T5 grow lights. CFL grow lights are the small twisty-looking bulbs you can find anywhere you normally buy light bulbs. They can be used in tiny spaces where no other grow light would fit, such as the inside of a cabinet. In such a small space, a lack of clearance or ventilation can cause heat buildup and component degradation, leading to errant electrical activity. T5 grow lights are among the most easily available types of grow lights and are used to grow many different types of plants; they can be found in many garden and home improvement stores.

High-Intensity Discharge (HID)

HID lights used in cannabis growing include metal halide (MH), high-pressure sodium (HPS), ceramic metal halide (CMH), and light-emitting ceramic (LEC) grow lights. HID bulbs generate a great deal of heat, with surface temperatures ranging from roughly 500 to 1,000°F. Because of this concentrated heat production, the bulb should be placed in a hood with some form of cooling to dissipate the heat, which is especially important for larger lights rated above 250W. As a result of this immense heat, most growers use an exhaust fan with ducting to vent the heat, which adds more equipment that can fail.

Light Emitting Diode (LED)

LED lighting is the newer, cost-saving trend in these operations. Although LED lamps usually run much cooler than HPS bulbs of similar wattage, they still produce heat, and the larger sizes (300W and up) may need to be vented with an exhaust fan to prevent overheating. Water-cooled LED fixtures are also becoming popular to deal with the heat normally produced during operation.

Pressure Vessels

As we learned in high school biology, plants consume CO2 to produce energy through photosynthesis. Cannabis growers supply CO2 gas into grow areas to significantly increase the flower output of the plants; larger flowers equal higher profit from each plant. Over-pressurization of piping fed from high-pressure CO2 tanks, inside or outside the building, can be a significant hazard. Without proper pressure relief valves installed, gas piping can rupture like a grenade if an accidental overpressure condition occurs.

Criminal Revenge

Due to the illegal nature of these operations, criminal intent is a real factor in the cause of a fire. Competition between nearby operations, money owed, and theft of equipment or product are often motives. More often than not, the suspects and individuals involved in the cultivation are never seen again.

Hash Oil Extractions and Their Hazards

The butane honey oil or butane hash oil (BHO) extraction method uses butane gas to break off and dissolve the trichomes into the solvent and carry them away from the plant material. The butane is dispensed as a liquid but quickly turns into a heavier-than-air gas that accumulates in low-lying areas. This creates a highly flammable, dangerous environment. Ignition sources can be similar to those of a typical natural gas leak in a structure, such as pilot lights, candles, electrical items, or the lighting of cigarettes.

The BHO method provides a higher yield than other methods, and the product can be consumed in vape pens, candies, waxes, and other increasingly popular forms. State-licensed producers of hash oil utilize sophisticated systems that can cost hundreds of thousands of dollars. Those wanting to make hash oil at home don’t have to spend nearly as much, but the result is a lack of safety procedures and a risk to themselves and those in adjacent structures.

Conclusion

As the legalization of cannabis continues throughout the country, fire and safety hazards will continue to be present. Those handling the investigations and claims must stay up to date with the trends of cannabis growing and learn how to safely mitigate the potential remaining dangers. Effective evaluation of potential equipment failures in cannabis operations is extremely important, as the dangers can be serious. Knowing the proper precautions and having trained professionals and experts to advise and guide through the dangers and potential problems is critical.

 

 

FMCSA Publishes Draft Of Medical Examiner’s Handbook For Proposed Regulatory Guidance On Driver Physical Qualifications

August 18, 2022 • Source: Joe Pappalardo, Gallagher Sharp LLP

On August 16, 2022, the Federal Motor Carrier Safety Administration (FMCSA) published a 122-page draft of the new Medical Examiner’s Handbook that could become a guide for certified medical examiners who determine whether a driver meets the physical qualifications for commercial driving. The Agency also proposed changes to the Medical Advisory Criteria now published in the Code of Federal Regulations, 49 CFR part 391, Appendix A.

Under the current regulations, medical examiners make physical qualification determinations of drivers on a case-by-case basis and may refer to the related Medical Advisory Criteria for guidance. See 49 CFR 391.41 through 391.49.

According to the FMCSA, the handbook would provide medical examiners clearer information on specific regulatory requirements relative to a driver’s physical qualifications and offer further guidance to medical providers when making such determinations.

The current draft of the handbook, however, also offers recommendations on identifying drivers at risk for moderate-to-severe obstructive sleep apnea, a condition that medical examiners are not currently required to test for under the Federal Motor Carrier Safety Regulations (FMCSRs). Should the handbook be formally adopted by the Agency, the question arises whether it would become a catalyst for future regulatory requirements, such as screening commercial drivers for obstructive sleep apnea, under the FMCSRs.

The debate about regulatory requirements for CMV drivers with obstructive sleep apnea is nothing new.  In March 2016, the FMCSA published an Advance Notice of Proposed Rulemaking (ANPRM) that would have required CMV drivers who exhibit multiple risk factors for obstructive sleep apnea to undergo evaluations and treatment by a healthcare professional with expertise on sleep disorders.  However, this ANPRM was withdrawn in 2017.

FMCSA is now accepting public comments about the proposed regulatory guidance on or before September 30, 2022.

 

When Artificial Intelligence and the Internet of Things Collide

August 8, 2022 • Source: Elizabeth Fitch & Melissa Lin, Righi Fitch Law Group

I.        Introduction

As technology continues to rapidly advance, and with pressure to get new technology products and programs to market quickly, the use of artificial intelligence (“AI”) and internet of things (“IoT”) devices has become an integral part of our daily lives. The exponential increase in the use of these technologies comes with an increase in liability risks for consumers, programmers, and manufacturers alike. Numerous poorly developed AI systems have shown patterns of discrimination and proposed flawed or even life-threatening solutions to problems. IoT devices have continued to injure and kill people when they malfunction, while also posing a major cybersecurity risk. With the rapid adoption of these technologies, the legal and insurance industries have struggled to deal with the new and unanticipated risks that accompany them. This article addresses the emerging risks associated with the unchecked and unexpected consequences of artificial intelligence and IoT going rogue, and how claims professionals can best prepare to handle these claims.

II.        Emerging Sources of Liability

a)      Artificial Intelligence

Both businesses and individuals have become increasingly reliant upon AI, and current trends show that we will only become more dependent as time goes on. In the McKinsey Global Survey conducted in 2021, 56% of businesses reported adopting AI to perform operational functions. Businesses are not the only ones employing AI; individuals and governments have also realized the potential AI presents. In 2021, private investment in AI totaled $93.5 billion, more than double the private investment made in 2020. Additionally, federal lawmakers in America have started to notice the impact AI can have: while in 2015 there was only one bill that proposed regulations involving AI, 130 were proposed in 2021. This increase in attention shows that more people are becoming aware of the prospects AI offers as well as the threats it can pose if not properly controlled. However, as the prevalence of AI continues to increase in our society, so too will the risk of liability caused by AI malfunctioning.

Microsoft’s “Tay” chatbot offers a cautionary tale of an AI system gone rogue: Tay “learned” from Twitter users to make extremist and bigoted statements. Amazon scrapped a machine-learning tool for selecting the top candidate resumes because it discriminated against women. In another well-publicized gaffe, Google’s photo recognition software labeled a Black couple as “gorillas.” While these incidents caused embarrassment and outrage over the conclusions these faulty AI systems produced, such damages are still on the lower end of the spectrum of the harm AI can cause.

Other instances of faulty AI have had the potential to ruin individuals’ lives or cause serious bodily harm or death. For example, many law enforcement agencies now use AI to help them find criminals. However, such AI often misidentifies people as criminals, which can lead to false arrests and put innocent people in danger. Amazon’s Rekognition AI was one of these faulty systems: it misidentified twenty-seven professional athletes as criminals during a test run by the ACLU that compared pictures of people with criminal mugshots. Fortunately, this was only a test, so the AI was not actually being used to arrest people. However, there have been cases where AI was used to arrest innocent people. In June 2020, police in Detroit relied on a facial recognition AI to arrest Robert Julian-Borchak Williams for felony larceny. The program matched Williams to a blurry surveillance picture of the real perpetrator shoplifting $3,800 worth of timepieces. As a result, Williams was arrested in front of his wife and children and wrongfully detained. Although Williams was eventually able to prove his innocence, this incident shows the potential for people to be endangered by the mistakes of AI.

Errors made by AI could also have directly lethal consequences, as shown by IBM’s AI, Watson. In 2017, Watson was used to recommend treatments for cancer patients. In one case involving a 65-year-old lung cancer patient who developed severe bleeding, Watson recommended giving the patient a drug that could cause a fatal hemorrhage. Fortunately, the doctor supervising the AI knew better and refused to give the patient the medication. However, as AI becomes more prevalent in our society, there will surely be cases where professionals incorrectly defer to an AI’s recommendations and severely injure others.

For these reasons, it is clear that AI left to run unchecked can represent major and even existential risks to insureds.  Accordingly, claims professionals will need to prepare for the liability threats that AI reliance may cause their clients.

b)     Internet of Things

IoT describes devices that have sensors, software, and other technologies that connect and exchange data with other devices over the internet. While this connectivity has led to many improvements in our quality of life, there are also inherent risks that the IoT presents. By some estimates, 75 billion active IoT devices will be connected to the internet by 2025. Accordingly, claims professionals can expect to see claims ranging from property damage and business interruption caused by threat actors taking down the grid to wrongful death and catastrophic injury claims.

i)       Dangers of Autonomous Vehicles

When IoT devices malfunction, they have the potential to wreak havoc and create unexpected liability exposures. Cautionary tales include everything from rogue crop-dusting IoT devices destroying crops to semi-automated vehicle failures resulting in serious accidents. In the last couple of years alone, people have been involved in numerous crashes because of their reliance on the self-driving capabilities of newer vehicles. The Tesla Autopilot feature, for example, has become the poster child of this issue. Ever since Tesla began selling cars with the Autopilot feature, people have been getting into crashes as they metaphorically, and sometimes literally, fell asleep at the wheel. Fortunately, most crashes have been relatively minor up to this point, but there have been more severe accidents resulting in serious injury or even death. With an estimated 765,000 Tesla vehicles in the United States equipped with Autopilot, and even more vehicles from other manufacturers offering similar features, claims professionals should be prepared to see an increase in crashes involving such malfunctions.

Similarly, Uber’s self-driving car program has not been without controversy. In fact, one of Uber’s test drivers for its fully autonomous vehicles is now facing negligent homicide charges for allowing the car to hit and kill a pedestrian who was walking her bike across the road. Investigations revealed that the test driver, who was supposed to be watching the road, was streaming a television show instead. Although Uber was not held criminally liable in this situation, it received heavy criticism that caused it to halt its autonomous vehicle testing. However, this event did not deter the company for good. In 2020, Uber’s self-driving cars were allowed to resume testing in California, along with those of 65 other transport firms, and the company has shown a commitment to improving and implementing its autonomous cars. As such, claims professionals should prepare to see more accidents caused by distracted drivers.

ii)     IoT Manufacturing Malfunctions

IoT manufacturing malfunctions are becoming an all-too-frequent occurrence. Robotics in manufacturing plants have also proven to be deadly when they malfunction; there have been many instances where workers have been crushed or bludgeoned by robots in these facilities when the machines go haywire. Often these accidents are caused by the robots’ strict adherence to their coding, so when a robot encounters a novel situation, it often does not know how to respond. In 2016, a study found 144 deaths, 1,391 patient injuries, and 8,061 device malfunctions related to robotic surgical machines from 2000 to 2013. The reported malfunctions during surgeries included uncontrolled movements of the robot, spontaneous switching on and off, loss of video feed, system error codes, and electrical sparks causing burns. Overall, the study found 550 adverse surgical events per 100,000 surgeries.

There is no denying that AI and IoT devices will become more and more integrated into our daily lives as time goes on.  However, as that happens, new and unanticipated risks will begin to emerge alongside them.  Therefore, it has become abundantly clear that claims professionals will have to adapt to the changing times to understand how to best handle the damages that result from the technological advancement of society.

III.       Legal Response to AI and IoT

As AI and IoT are relatively new fields of technology that have only been widely commercially available for the last few years, there is not much established law regarding liabilities caused by them. Accordingly, there is no established consensus regarding how damages caused by these technologies should be handled. However, that has not stopped courts and legal scholars from developing their own legal theories for the allocation of liability stemming from AI and IoT.

c)      Legal Theories for AI Liability

AI presents particularly unique legal liability challenges because, in theory, the software program that was sold to the user will not be the same program that caused the liability. This is because the machine learning capabilities of AI necessarily result in the software rewriting its own code as it evolves with the data it receives, becoming more efficient or accurate at performing its task. As such, the legal community is debating who should be held liable for damages caused by AI. Is the consumer who used the AI liable because the defect manifested itself while under his control, or should the manufacturers and programmers have predicted the defect and prevented it from manifesting preemptively? If the suppliers are to blame, which link in the chain of production is at fault, and for what percentage of damages? Can an AI be treated as a legal entity, like a corporation, and be held directly responsible for damages it causes? To answer these questions, several legal theories have been proposed to allocate liability for damages caused by AI.

One legal theory proposed is to apply the doctrine of respondeat superior to AI liability. Respondeat superior, also known as the “Master-Servant Rule,” holds that a principal is liable for the actions of an agent who was negligent while acting within the scope of employment. A similar theory is to treat AI as an ordinary tool: just as a person would be liable for negligently operating heavy machinery, an AI consumer would be liable for negligently implementing AI. Under this theory, a person who buys or uses the AI would be liable for damages caused by the AI while it acts under that person’s control. This creates a principal-agent-like relationship between the consumer and the AI, so that actions taken by the AI can be imputed to the consumer. Therefore, the consumer would not be able to evade liability for damages caused by the AI simply because he did not intend for the AI to act in the way it did. Under this theory, the AI user could hold the AI developer liable for damages only if he can prove the AI was defective while under the developer’s control and that the defect was the proximate cause of the damages.

Alternatively, some believe that AI should be treated as a legal entity, similar to a corporation. The argument for this position is that AI systems are capable of rational thought and independent decisions, so they should be held liable for the damages they cause. The benefit of this approach is that it becomes easy to identify the liable party, because the AI itself would be directly suable when it malfunctions, rather than the user or creators who could not have anticipated the failure.

Perhaps the most likely approach the courts will take for AI liability is to adapt current product liability law to AI. Product liability law has historically evolved alongside emerging technologies, so it is likely to evolve with the emergence of AI to address novel issues. If that is the case, an AI user could be found liable for damages caused by an AI if he used the AI in a negligent manner that caused the damages. However, a user would not be liable for damages if he used the AI in a reasonably foreseeable way that inadvertently caused the AI to develop a defect. In such a case, the software companies that developed the AI would likely be found liable under product liability law for failing to anticipate how an AI could develop defects through reasonably foreseeable interactions with humans. Courts would likely perform a risk-utility test to determine whether safety precautions could have been taken to decrease the risk of AI malfunctions without lowering the AI’s utility or unnecessarily increasing the cost of producing it.

d)     Liability for IoT Damages

The IoT claims landscape is equally complex. As the cybersecurity risks of IoT become more prominent, courts will likely be more willing to find companies that produce IoT devices liable for products that lack adequate security measures. Product liability arises when a product was in a defective condition while under a producer’s control that made it unreasonably dangerous and was the proximate cause of a plaintiff’s damages. IoT producers may not be the only ones held liable for IoT cybersecurity issues; IoT users will likely also face liability.

Most companies today use IoT in some capacity for their day-to-day operations. While conducting business, they will inevitably collect sensitive customer data that they have a duty to protect. The problem is that many IoT devices that are vulnerable to hacking are not monitored, and their software is never updated. Unsurprisingly, these devices often get hacked and act as backdoors for attackers to gain access to sensitive customer information. As these cyberattacks become more frequent, courts will likely start holding companies to a higher standard of care, requiring proper precautions to ensure that all IoT devices connected to a network are secure. Therefore, negligence claims against companies with substandard IoT cybersecurity will likely increase in the years to come.

IV.       Claims Professional Response to AI and IoT Liability

With an increase in AI products, claims professionals will need to make a fundamental shift in the processing and evaluation of claims. These claims will require far more technological sophistication. The claims handler will be well served by developing a deep understanding of technology and approaching AI and IoT claims like complex product liability claims rather than simple negligence cases, because any accident involving the product could have been caused by its AI. Claims professionals will have to be prepared to follow the chain of production for any AI sold to determine which point of the manufacturing process may have been responsible for the damages. Therefore, it will be crucial for claims professionals to find experts for various types of AI to analyze claims and determine whether the AI malfunctioned and, if it did, who is to blame. Additionally, claims professionals that cover producers of AI products will need to adjust their rates based on how predictable the AI’s behavior is and the product’s potential to cause damages if the AI malfunctions.

The evolution of technology necessarily results in the evolution of insurance products.  New insurance products are already being developed to respond to the risks associated with artificial intelligence and IOT devices. Claims professionals will need to keep abreast of the insurance product iterations to conduct a proper coverage analysis at the outset of the claims handling process. 

Like with AI products, claims professionals will also need to gather new resources and experts to evaluate the unique dangers IoT devices present.  Claims professionals will not only need to be able to tell if an IoT device’s programming was the cause of damages in a claim, but also if a lack of cybersecurity caused the damages.  Furthermore, because any company could be liable for a cybersecurity breach, claims professionals will need to evaluate the cybersecurity measures companies are taking for IoT devices connected to their network to determine risk and evaluate claims.  

V.        Conclusion

Evaluating accidents involving IoT devices and artificial intelligence is unique and requires an understanding of how IoT and AI contributed to those accidents. Ongoing education for everyone in the risk industry, from claims professionals to lawyers, on technology developments and the legal liabilities associated with IoT failures and the unintended consequences of artificial intelligence will be critical to managing risk.


 

Colorado Court Grants Harris, Karstaedt, Jamison & Powers, P.C.'s Motion for Dismissal

July 28, 2022 • Source: Jamey Jamison, Harris, Karstaedt, Jamison & Powers, P.C.

Attorneys from Harris, Karstaedt, Jamison & Powers, P.C. filed a motion for dismissal in the Colorado Court of Appeals case, Fierst v. Greenwood Athletics. In this case, the plaintiff signed a membership agreement with the client, an athletic club, which contained an exculpatory clause. The court found that the agreement was clear and unambiguous. Read the full opinion.

 