Latest News

Man’s Best Liability: A Brief Overview of Homeowner’s Insurance and Dog Bites*

August 2023 • Source: Michael "Maz" Mazurczak, Melick & Porter, LLP

Insurance coverage for dog bites is an increasingly common problem. The American Veterinary Medical Association has found that nearly 85 million dogs live in US households, and according to the Insurance Information Institute, dog bites make up approximately one-third of homeowners insurance claims. Those claims have increased by ninety percent over the last fifteen years, with a 2.2 percent jump from 2020 to 2021 alone. This makes sense considering that 4.5 million people are bitten by dogs each year in the United States. And these injuries can carry hefty financial consequences: as of 2021, the average award in dog bite cases (counting both settlements and trial verdicts) was $49,025.

Massachusetts imposes strict liability for dog bites, regardless of the dog’s breed or whether the dog has a history of aggression. This means a plaintiff does not have to prove that the person responsible for the dog was negligent in order to recover damages. Precautions such as a leash, harness, or fence are also not enough to avoid liability. It is key to note that the law applies to the person responsible for the dog: case law is clear that someone housing and caring for another’s dog can still be found liable for a bite that happens while the dog is in their care. The only exceptions to this strict liability are if the plaintiff provoked the dog, was trespassing on the dog owner’s property, or was threatening the owner or another individual. Damages have even been awarded in cases without serious physical injury; PTSD and mental anguish suffered by the plaintiff following a dog attack can be enough.

Homeowners insurance will generally cover dog bites that occur in the home, provided the policyholder informed the insurer in advance of each dog being adopted. If the insurer was not previously aware of the dog, or the attack occurs off the owner’s property, the insurer may deny coverage. While local laws may not prohibit ownership of particular dog breeds, insurance companies can deny coverage based on a dog’s breed, and often do. The most commonly denied breeds are pit bulls, presa canarios, rottweilers, German shepherds, and akitas. Dogs with a history of aggressive behavior will also almost always be denied. Some insurers will not cancel the policy altogether but will require the dog’s owners to sign liability waivers for dog bites, acknowledging that their homeowner’s insurance does not extend coverage to the dog. Separate insurance that specifically covers dog attacks and property damage caused by a dog can also be purchased, but a homeowner’s insurer will often still cancel a policy if the policyholder adopts what its guidelines deem a dangerous dog.

*Melick & Porter, LLP loves dogs and would never want to discourage anyone from adopting a furry friend. This is simply a guide to know how to best protect yourself and your pup.

 

US Supreme Court Ruling to Make Philadelphia the Center of Personal Injury Litigation

August 2023 • Source: Ross J. Di Bono, Zarwin Baum DeVito Kaplan Schaer Toddy, P.C.

On June 27, 2023, the United States Supreme Court issued its ruling in Mallory v. Norfolk Southern Railway Co., 600 U.S. ___ (2023)[1], upholding a Pennsylvania statute that grants Pennsylvania state courts general jurisdiction over all foreign corporations that register to do business in Pennsylvania.  This ruling has the potential to drastically increase the number of cases filed in Pennsylvania, particularly personal injury cases. 

The Mallory case involved a claim for damages due to illnesses allegedly caused by exposure to carcinogens.  Plaintiff was a longtime Norfolk Southern freight mechanic who worked for the company in Ohio and Virginia.  While Plaintiff lived in Pennsylvania for a period of time, he did not allege that he sustained any exposure to the cancer-causing chemicals while in Pennsylvania. 

Ultimately, Mallory retained a Pennsylvania personal injury attorney and filed suit in Pennsylvania state court for his alleged injuries.  He asserted that the Pennsylvania state courts had personal jurisdiction over Norfolk Southern pursuant to a Pennsylvania statute governing registration of foreign corporations doing business in Pennsylvania.  Pursuant to 42 Pa. Cons. Stat. §5301(a)(2), as a condition of registering to do business in Pennsylvania, a foreign corporation must consent to Pennsylvania state courts exercising general jurisdiction over it.  This means that by registering to do business in Pennsylvania, a foreign corporation consents to being sued in Pennsylvania even though the cause of action did not arise in that state. 

Norfolk Southern opposed the lawsuit, arguing that the consent to general jurisdiction set forth in 42 Pa. Cons. Stat. §5301(a)(2) violated its due process rights under the 14th Amendment of the US Constitution.  The issue proceeded to the Pennsylvania Supreme Court, which sided with Norfolk Southern and ruled that the consent to general jurisdiction requirement violated the 14th Amendment’s Due Process Clause. 

Mallory appealed this ruling to the US Supreme Court.  In a divided, 5-to-4 opinion authored by Justice Gorsuch, the Court overruled the Pennsylvania Supreme Court and held that the consent to jurisdiction requirement set forth in 42 Pa. Cons. Stat. §5301(a)(2) did not violate the 14th Amendment.  Justice Gorsuch noted that the issue was not new; the Supreme Court had decided it in 1917 in Pennsylvania Fire Ins. Co. of Philadelphia v. Gold Issue Mining & Milling Co., 243 U.S. 93. 

Per Justice Gorsuch, Pennsylvania Fire specifically held that similar consent to general jurisdiction clauses did not violate the 14th Amendment’s Due Process Clause; thus, Pennsylvania Fire was binding precedent.  Justice Gorsuch specifically rejected Norfolk Southern’s argument that the Pennsylvania Fire ruling had been chipped away by subsequent Supreme Court decisions.  By a 5-to-4 margin, the Court vacated the ruling of the Pennsylvania Supreme Court and remanded Mallory for further proceedings. 

While Justice Gorsuch was clear that the consent to general jurisdiction clause does not violate due process, Mallory may yet be overturned on other grounds.  In a concurring opinion, Justice Alito noted that, while he agrees that the consent to general jurisdiction clause does not violate the 14th Amendment’s Due Process Clause, he believes it may violate the Dormant Commerce Clause.  Per Justice Alito, Norfolk Southern raised the Commerce Clause issue before the Pennsylvania Supreme Court, but that court did not rule on it because it had reached a dispositive conclusion on the due process argument.  Pursuant to Justice Gorsuch’s ruling, Mallory is now being remanded to the Pennsylvania Supreme Court, which is expected to rule on the Dormant Commerce Clause issue.

Given that the Pennsylvania Supreme Court already found the consent to general jurisdiction clause unconstitutional, there is a chance that the court once again rejects the requirement.  As Justice Alito noted, under the Commerce Clause, the Constitution grants Congress the power to regulate commerce among the states.  The Dormant Commerce Clause prohibits states from enacting laws that unduly restrict interstate commerce.  Justice Alito also noted that the Constitution restricts a state’s power to reach out and regulate conduct that has little, if any, connection to the state’s legitimate interests.

Based on his concurrence, Justice Alito has left the door open for Mallory to be reevaluated under the Dormant Commerce Clause, as he noted that the consent to general jurisdiction clause allows Pennsylvania state courts to preside over issues that have little to no connection to the state, and therefore interferes with interstate commerce.  As Mallory was decided by a slim margin, there are signs that, should the case return to the Supreme Court on the Dormant Commerce Clause argument, the vote may go differently. 

Practically, this decision opens the door for Pennsylvania to see an influx of personal injury lawsuits, most of them likely to be filed in Philadelphia.  As Justice Alito noted, Philadelphia is especially favorable to tort plaintiffs.  Since forum shopping is not prohibited by the Constitution, it is anticipated that most of these new cases will be filed in Philadelphia, even those with no tangible connection to the city or state.

Because of this, there will be a renewed focus on motions challenging the propriety of venue in Philadelphia, as well as on removal proceedings.  In Pennsylvania, venue is proper in any county where a corporation “regularly conducts business.” Pa. R. C. P. 2179(a)(2).  When a corporation challenges venue solely based on its business contacts, the courts conduct a so-called quality-quantity analysis. Substantively, venue is proper under Rule 2179(a)(2) where there is: (1) a “quality of acts” conducted by the corporation that directly further or are essential to corporate objectives; and (2) a “quantity of acts” that is “sufficiently continuous so as to be considered habitual.”

Recently, the Pennsylvania Supreme Court heard oral argument on the “quantity” prong of the venue analysis in Hangey v. Husqvarna Prof'l Prods[2]. There, the plaintiff purchased a lawnmower from a retailer in Bucks County, PA.  After falling off the lawnmower while operating it, the plaintiff sustained severe injuries to both legs when the lawnmower continued to run and rolled over his legs. Plaintiff, a resident of Wayne, PA, filed a products liability case against the manufacturer and retailer in Philadelphia County. The trial court issued an order transferring the case to Bucks County based on the defendants’ preliminary objections for improper venue. The trial court reasoned that although .005% of sales from a multi-billion-dollar company satisfied the “quality” prong, the tiny percentage failed to meet the “quantity” standard.

The Superior Court disagreed and overturned the trial court, holding that the trial court abused its discretion by focusing on the percentage of business when ruling that the contacts did not satisfy the quantity prong of the venue analysis, given that the defendants are vast, billion-dollar entities with at least one authorized dealer in Philadelphia.

Accordingly, venue is going to be one of the most “hot button” issues before the Pennsylvania Supreme Court this coming year.  Not only will the court address the consent to general jurisdiction clause under the auspices of the Dormant Commerce Clause, but it will also have the chance to set a definitive rule by which to measure a corporation’s quality of contacts with a given county.

These rulings have the potential to make Pennsylvania, and Philadelphia in particular, a major focus of personal injury litigation throughout the country.  These decisions will also place a renewed focus on challenging venue, whether by preliminary objection or removal.  Since a challenge to venue must occur within twenty days of service, and removal within thirty days of service, foreign corporations registered to do business in Pennsylvania must be judicious in reporting new lawsuits and obtaining defense counsel to make sure that all proper precautions are taken to ensure that cases are not tried in an improper venue. 


[1] Mallory v. Norfolk Southern was not directly handled by Zarwin Baum attorneys. 

[2] Hangey v. Husqvarna Prof'l Prods was not directly handled by Zarwin Baum attorneys.  

 

Large, Well-known Massachusetts Franchisees Face Recent Labor Violations

August 2023 • Source: Michael "Maz" Mazurczak, Melick & Porter, LLP

Over the past three months, two national brands have paid large settlements to workers for labor violations at various Massachusetts locations. First, two Dunkin’ Donuts franchisees, located in central and southeastern Massachusetts, were fined a collective $370,000 for child labor violations. After a complaint to the Attorney General’s Office alleging that the locations were in violation of M.G.L. c. 149, investigators uncovered over one thousand violations of child labor statutes.

Between the two locations, the violations included employing minors after 8:00 P.M. without adult supervision, employment of 16- or 17-year-olds for more than nine hours a day, employment of minors earlier than 6:00 A.M., and failing to obtain valid work permits. Since January of 2022, the AG’s Office has issued 32 citations against various Dunkin’ franchisees, the majority being related to child labor violations. In sum, the violations have totaled over $560,000.

Similarly, in June, Dave & Buster’s was cited for breaking similar Massachusetts labor laws surrounding meal breaks and child labor, leading to it paying $275,000 in settlement to over 800 aggrieved employees. The violations came following a parent’s complaint to the AG’s office after their child was denied meal breaks and forced to work past midnight on a weekend.

What does this mean? With such well-known brands making news headlines and paying large settlements within the span of a few months, it would not be surprising to see parents of young workers continue to file complaints with the AG’s Office and look to “cash in.”  Employers must remain aware that, in Massachusetts, employees are entitled to a 30-minute meal break (during which they may leave the premises) if their shift is longer than six hours. Further, on school nights, 16- and 17-year-olds may not work past 10 P.M. Employers must remain vigilant of these, along with the rest of Massachusetts’s child labor laws, which are summarized at the link below:

https://www.mass.gov/service-details/massachusetts-laws-regulating-minors-work-hours
 

Reshaping the Human Experience and Exploiting the Human Condition: The Disturbing Reality and Risk of Unregulated Technological Developments

August 2023 • Source: Elizabeth S. Fitch, Melissa Lin, and Kyle James, Righi Fitch Law Group

The most disturbing reality of emerging disruptive technologies is the absence of ethical and regulatory oversight.  A Google whistleblower claimed that Google built a machine with human consciousness.  Google immediately fired him and issued a press release stating that Artificial Intelligence (“A.I.”) is nowhere close to human consciousness.  But how would the average human know?  We don’t!  Another disturbing technological development is deepfake technology.  Deepfake software developers are hard pressed to articulate why the technology is helpful to humanity, yet they forge ahead at light speed to get their products to market.  And while the value of technologies from vehicle telematics to cell phone data tracking in reducing risk is well documented, there is still very little oversight of, or thought about, their downsides and misuse.  It is critical that insurers and attorneys understand the risks presented by these emerging and disruptive technologies so that claims professionals and defense lawyers can begin to build strategies and initiatives to handle the unique claims that will arise from their implementation. 

I. Artificial Intelligence 

"A.I. is probably the most important thing humanity has ever worked on.  I think of it as something more profound than electricity or fire." – Sundar Pichai 

Both businesses and individuals have become increasingly reliant on A.I., and current trends show that we will only become more dependent as time goes on.  In a McKinsey Global Survey conducted in 2021, 56% of businesses reported adopting A.I. to perform operational functions.  As the trend toward A.I. accelerates, it is important to regulate these machines, or negative consequences may follow from what Pichai believes is “the most important thing humanity has ever worked on.”

A.I. in the medical field could help doctors focus more on procedures instead of administrative tasks.  A GPT-3-based chatbot was created to aid doctors in dealing with patients; it was designed to help schedule appointments and talk with those struggling with mental health.  Unfortunately, the A.I. reportedly had multiple issues handling simple tasks, such as determining the price of X-rays, and had time-management problems while scheduling patients.  The A.I. also drew major concerns over its handling of mental health, telling a fake patient in an experiment to commit suicide.  The consequences of a lack of oversight in this scenario could include a loss of business or even life.  When these machines are programmed, they cannot discern whether harms like telling a patient to kill themselves are good or bad; they simply do what they are programmed to do.  This can cause additional harm when society’s biases are replicated in A.I.

Automated systems designed to be impartial and eliminate human bias can at times magnify people’s biases instead of mitigating them.  An example can be seen in the criminal justice system.  As reliance on A.I. increases, so does the harm of automated decision-making in the criminal justice system.  PredPol, predictive-policing software used by the Los Angeles Police Department, was designed to predict when, where, and how crime would occur.  The approach wound up disproportionately targeting lower-income and minority neighborhoods; when a study was performed on overall crime in the city, it showed crime was much more evenly distributed than the A.I. indicated.  Sentencing can also be influenced by A.I. tools that assess whether a defendant will recidivate.  Without proper oversight, the criminal justice system may take an A.I.’s recidivism probability as an impartial estimate free from bias.  A 2016 study by ProPublica determined that one such A.I. was twice as likely to incorrectly label black prisoners as being at high risk of reoffending as white prisoners.  While the prisoner’s race was not directly considered by the A.I., the other variables that were considered clearly disfavored black Americans.  As a result, many black prisoners may be receiving stricter jail sentences or higher bail because these A.I. systems are incorrectly labeling them. 

Such biases held by A.I. can spread through other institutions.  An audit of a resume screening tool found that the two factors most strongly associated with positive job performance were the name Jared and whether the applicant played lacrosse, two factors more prevalent among white applicants than nonwhite applicants.  If this is combined with a belief that A.I. is impartial, and there is no oversight looking for prejudicial factors such as these, affected job applicants can be left with no legal claim to address potential employment discrimination.  Issues like these don’t often reach candidates for high-paying roles, who are typically evaluated by other people.  Instead, A.I. is more likely to review applicants for lower-income jobs who may not have the resources to seek relief.

A. ChatGPT*

ChatGPT is one such A.I. system, already being used across the country. The emergence of ChatGPT, a revolutionary language model developed by OpenAI and free for anyone on the planet to use, has brought both excitement and trepidation to the realm of artificial intelligence.  Just as with other disruptive technologies, concerns about ethical and regulatory oversight are becoming increasingly pertinent within the domain of A.I. language models.

ChatGPT operates based on patterns and information present in the massive datasets it was trained on.  While its ability to generate human-like text is impressive, it also inherits the biases encoded within these datasets.  These biases can range from gender and racial biases to socio-economic and cultural prejudices.  Just as seen with A.I. systems in other sectors, ChatGPT's outputs may inadvertently amplify societal biases, exacerbating rather than alleviating them.  For instance, if prompted with text containing subtle biases, ChatGPT might unknowingly generate responses that reinforce those biases.  This can lead to harmful consequences when used in contexts such as providing customer support or generating content for various industries.  Imagine a scenario where ChatGPT is employed to assist in human resources, and its responses subtly favor certain genders or ethnicities during candidate evaluations, perpetuating discrimination in hiring processes.

ChatGPT's remarkable ability to generate coherent and contextually relevant text can sometimes blur the lines between factual accuracy and fabrication.  The model lacks a true understanding of the world, generating responses based on patterns it learned during training.  This becomes a significant concern when considering its use in sensitive sectors like healthcare or law.  Like the medical A.I. discussed earlier, ChatGPT could generate responses that, although coherent, might contain misinformation or potentially dangerous advice.  For example, if asked about medical symptoms and treatments, ChatGPT might produce information that, while plausible sounding, is incorrect and potentially harmful if followed.

One of the pressing challenges with ChatGPT and similar A.I. models is their potential to contribute to the spread of misinformation.  Given their ability to generate text that resembles human-authored content, these models can unwittingly contribute to the dissemination of false or misleading information.  In a world where fake news and misinformation are already significant concerns, the unregulated deployment of A.I. language models like ChatGPT adds another layer of complexity to the battle against false information.

While the capabilities of ChatGPT and similar A.I. language models are undeniably impressive, their unfettered use raises serious ethical and regulatory issues.  Just as with A.I. in other domains, there is a pressing need for oversight, transparency, and bias mitigation in the deployment of ChatGPT.  As A.I. continues to weave itself into the fabric of human society, it is imperative that we engage in thorough discussions and implement safeguards to ensure that these technologies contribute positively to our world without exacerbating existing challenges.

*Everything in this section was written by ChatGPT 3.5. We inputted the A.I. section of the article into ChatGPT’s textbox and prompted it to write the ChatGPT section following the style and content of the preceding information. As you can see, it “recognizes” the potential pitfalls of its widespread use and the clear need for oversight.  What is also apparent is its constant need to compliment itself.                           

B. Artificial Intelligence Oversight 

A.I. oversight is a necessity if manufacturers wish to mitigate some of the issues and biases of A.I.  In 2015, only one bill proposing regulations involving A.I. was introduced; in 2021, there were 130.  This increase shows that more people are becoming aware of the prospects A.I. offers as well as the threats it can pose if not properly controlled.  If the public perception of A.I. is that it is perfect, issues of bias can be brushed off as simply following a program created to be objective.

A recent poll shows Americans fear that A.I. may come with other negative consequences that could create new legal issues.  A national poll conducted on behalf of Stevens Institute of Technology showed that loss of privacy is one of the leading concerns surrounding A.I., with Gen Zers the least concerned (62%) and Baby Boomers the most concerned (80%).  Most respondents (72%) believe that countries or businesses may use A.I. irresponsibly, and most (71%) also believe that individuals will use A.I. irresponsibly.  While this shows concern about the growth of A.I. technology, more than a third of respondents (37%) do not believe that A.I. will lead to gender bias.  These respondents could be discriminated against in ways they cannot readily perceive, without the resources to file a claim. 

The responses show that while A.I. may be accepted in society, sufficient oversight is a necessity.  A possible solution is to employ more diverse A.I. teams to help produce the data sets.  Diversity can create data sets that are more representative of society at large and can potentially mitigate some of the biases A.I. technology may have.

II. Deepfake Technology

One of the more feared A.I. technologies is the deepfake.  The inherent danger of deepfakes comes from people’s inclination to believe what they see and/or hear.  The term “deepfake” refers to media generated with a subset of machine learning techniques known as deep learning.  Deep learning is a subset of machine learning, which is in turn a subset of A.I.  Here, a model uses training data for a specific task; the more data the model is given, the sharper and higher quality its output.  This data can be used to replicate videos, images, audio, text, and potentially other forms of media.  In 2020, a deepfake programmer was able to produce realistic tracks in the style of Jay-Z, Elvis, and Frank Sinatra using old, released music.

One of the more commonly known uses of deepfake technology is the face swap, in which one person’s face is placed on another’s body.  The swapped face seems to come to life, matching the mannerisms of someone speaking or performing some other activity.  It can be seen in movies when an actor’s face is made to look younger and placed on the body of another actor, and it has even been used to put deceased actors’ faces on screen for a scene.

Although the movie industry has benefitted, actors such as Kristen Bell, Scarlett Johansson, and Gal Gadot have all fallen victim to a harmful use of face swaps called deepfake pornography.  Deepfake pornography is when a face swap is applied to an individual (commonly someone famous) and placed on pornographic content, to make the individual look as if they were engaging in the pornographic conduct themselves.  Face swaps can be accompanied by a deepfake technique called lip syncing, in which voice recordings from different videos are combined to make a subject appear to say something.  This technology is accessible to most people, and examples of deepfakes that have fooled hundreds can be seen on apps such as TikTok, YouTube, and Facebook.

The accessibility of deepfake technology raises questions as to what can be done to protect against its misuse as the problem becomes more pervasive.  In 2020-21, over 100,000 deepfake pornographic images of women were created without their consent or knowledge.  In 2021, the A.I. text game A.I. Dungeon generated sexually explicit content involving children.  Such technology is rapidly improving and drawing concerns over how to counter or regulate it.  The possible harms of deepfake technology are numerous, including fraud, cyberbullying, the spread of misinformation, fake evidence in court proceedings, and even child predators masking their age when attempting to meet minors. 

A. Deepfake Oversight:

While deepfake technology is new, bills are being passed to help combat it.  Virginia has amended its revenge porn law to include deepfake pornographic content of a nonconsenting individual.  Existing laws can also apply to deepfakes depending on how they are used; for example, someone who uses a deepfake for extortion or fraud can be charged with those respective crimes.  As the quality of deepfakes improves, our awareness of them must improve as well, and companies have been urged to raise awareness of deepfakes to mitigate the harm that may come.

III. Telematics

A. Telematic dangerous propensities

Telematics can also be dangerous without proper oversight.  Telematics refers to the combination of telecommunications and information processing.  This technology is responsible for GPS tracking and for the risk factors insurers assess.  Insurance companies can monitor whether an individual is an accident risk and adjust insurance premiums accordingly; they may even be held liable if nothing is done in response to potentially risky driving behaviors.  Telematics has given vehicles benefits like the ability to disable a car when stolen, or software to unlock doors and activate heated seats.  While the introduction of telematics may seem benign, without oversight the tracking features in vehicles and cellphones can have negative consequences.

B. Telematic Oversight

Around 26 states have shown growing concern over privacy violations that may be present in both vehicle and cellphone tracking.  In response, multiple states are acting to bring vehicular or cellular tracking under their stalking statutes.  Data tracking on cellphones is another issue people are largely aware of, and it has potentially worse consequences than vehicle tracking: cell phones are always on our persons and can reveal the most intimate details of our lives.  As one federal appeals court stated, “a person who knows all of another’s travels can deduce whether he is a weekly churchgoer, a heavy drinker, a regular at the gym, an unfaithful husband, an outpatient receiving medical treatment, an associate of particular individuals or political groups — and not just one such fact about a person, but all such facts.” Legal precedent has alleviated some concern about governmental intrusion into cellphone data and tracking: the Fourth Amendment applies and, in most cases, requires a warrant to track an individual’s cellphone location data. 

IV. Claims Professional Response to A.I. and Telematic Liability

With the rise of A.I. and telematics, claims professionals will need to make a fundamental shift in the processing and evaluation of claims.  These claims will require far more technological sophistication.  The claims handler will be well served by developing a deep understanding of the technology and approaching A.I. and other emerging-technology claims like complex product liability cases rather than simple negligence cases, because any accident involving the product could have been caused by its A.I.  Claims professionals will have to be prepared to follow the chain of production for any A.I. sold to determine which point in the manufacturing process may have been responsible for the damages.  It will therefore be crucial for claims professionals to find experts for various types of A.I. to analyze claims and determine whether the A.I. malfunctioned and who is to blame if it did.  Additionally, claims professionals who cover producers of A.I. products will need to adjust their rates based on how predictable the A.I.’s behavior is and the products’ potential to cause damages if the A.I. malfunctions.

The evolution of technology necessarily results in the evolution of insurance products.  New insurance products are already being developed to respond to the risks associated with artificial intelligence and emerging technology.  Claims professionals will need to keep abreast of the insurance product iterations to conduct a proper coverage analysis at the outset of the claims handling process.  

As with A.I. products, claims professionals will also need to gather new resources and experts to evaluate the unique dangers IoT (Internet of Things) devices present.  Claims professionals will need to be able to tell not only whether an IoT device’s programming caused the damages in a claim, but also whether a lack of cybersecurity did.  Furthermore, because any company could be liable for a cybersecurity breach, claims professionals will need to evaluate the cybersecurity measures companies are taking for IoT devices connected to their networks in order to determine risk and evaluate claims.

Conclusion:

While the introduction of emerging technologies can benefit society, it requires sufficient oversight.  These machines are not perfect and, if misused, can do more harm than good.  As technology advances, it is important that the legal system and the society that manufactures A.I. grow in their awareness of the negative consequences such systems may have, and exercise the oversight necessary to remedy, mitigate, or take accountability for any potential pitfalls.

 

Rule 11 – An Underutilized Tool – One Defense Attorney’s Thoughts

August 2023 • Source: Elizabeth R. Sharrock, Partner, Rhodes Hieronymus Jones Tucker & Gable, PLLC, with assistance of Scott Love, Intern

What to do?  What to do?  We’ve all heard that the defense bar tends to lag in its response to the latest-and-greatest strategies employed by plaintiffs’ counsel to bludgeon defendants and their insurers into settling claims that are winnable and to inflate jury verdicts beyond anything resembling justice.  Think Reptile.  Think unanchored verdicts (e.g., waiving medical specials results in higher verdicts).

What if, in unison and across the country, we were to seek Rule 11 (or applicable state statute) sanctions more often, and even preemptively at the outset of litigation?  What if, in unison and across the country, we were to conduct discovery in a sort of “Defense-Reptile” fashion – calculated to draw out potentially sanctionable conduct of the opposition?  Of course, we would do so consistent with our ethical obligations and only when the facts of a case so warrant.

Ponder it.  If we pick and choose our battles wisely, over time we could build a pretty impressive databank of motions/briefs with supporting precedent to share amongst ourselves.  Over time, judges may become less hesitant to issue sanctions.  We just might get plaintiffs’ counsel to more carefully scrutinize the cases they decide to file, and certainly to consider voluntary dismissals and enter into smaller settlements. 

This all came to a head for me as I defend a client in a tragic double-fatality case.  There’s more to the story, but I will remain brief in my description.  Simply put, a federal agency charged with investigating deaths of this nature found that another party committed “willful” violations directly resulting in the loss of life.  Yet, no voluntary dismissal of my client has been forthcoming.  Did I provide opposing counsel with a safe harbor warning?  Yes, it was a very detailed roadmap.  Did I file my motion after they failed to cure?  Yes.  Do I know the outcome yet?  No.  But do I have a pretty good idea as to whether my client will soon prevail, or, if the trial court is not quite ready to pull the trigger, will later prevail on a re-urged request for sanctions?  Yes.  Worst-case scenario, even if the trial court hesitates to issue sanctions, have I educated the court?  Yes.  Do I have a really solid start on a summary judgment motion?  Absolutely.

If the facts of the case warrant, the time and expense involved in seeking sanctions are justified.  The act of drafting a detailed warning letter to opposing counsel, and then if necessary a Rule 11 motion, compels deep thought, promotes focus on applicable defenses and potentially necessary discovery, and causes one to pen the timeline of events and material facts that otherwise must be reported to the client, albeit in a different format.  Time spent is not wasted.

Some clients may be hesitant to pay us to draft “warning shot” letters and Rule 11 motions.  Suggest to them that the exercise serves multiple purposes and can actually function as a cost-saving measure.  Surely, if we provide sound reasoning for our request and obtain approval from our clients, we can pursue this avenue of relief on a more frequent basis.  We can then share our victories in a momentum-gaining endeavor to combat the Reptile and its progeny.

Food for much thought.

A listing of cases wherein sanctions are discussed is available here.  Who wants to join me in growing this list?

 