How C. Diff Infections Decrease with Fewer Antibiotics

The percentage of new Clostridium difficile infections reported in healthcare facilities has dropped for the first time since 2000, according to the CDC’s Emerging Infections Program. A preview of the program’s data on C. diff infections from 2011-2014 shows a decrease in infection rates in healthcare settings. According to Dr. Alice Guh, a medical officer at the Centers for Disease Control and Prevention, “Preliminary analyses suggest a 9 to 15 percent decrease in health care [C. diff] incidence nationally.”

But wait! The actual number of C. diff infections is on the rise. In 2011, C. diff infections killed almost 30,000 people, and an additional 500,000 cases of illness were reported. So what does it mean when infections are declining in healthcare settings, where they are most commonly contracted, yet rising elsewhere? Science does not yet have an answer, but the positive results so far indicate that cleanliness, not antibiotics, is the future.

A Brief Primer

Many of the people who carry C. diff in their intestines never develop an infection, because the beneficial bacteria in the gut are able to keep pathogens in check, much as they do with candida. If the beneficial bacteria cannot counteract the C. diff, infection can cause diarrhea, painful stomach cramping, kidney infections, fever, and dehydration in varying degrees. C. diff is also an incredibly resilient bacterium. Its spores can last for months outside the body and can only be killed with bleach, UV cleaning, and other similar methods.

The treatment for C. diff is usually antibiotics, stronger antibiotics, and the antibiotics of last resort. For anyone who is at all familiar with how the gut functions, this is a recipe for disaster. The antibiotics set the gut up for failure by killing the beneficial bacteria that balance gut flora and keep the C. diff in check. Studies have shown that even occupying the same hospital room as someone who has taken antibiotics increases the likelihood of a C. diff infection developing.

“C”-ing a Difference

So what has changed in the last ten years that has yielded the notable decrease of C. diff infection rates in healthcare facilities?

In unsurprising news, the answer is not antibiotics. Healthcare practitioners deliberately limited the number of unnecessary antibiotic prescriptions and instead focused on cleaning and on implementing new infection protocols aimed at controlling the spread of C. diff. These changes also help lower the rates of other antibiotic-resistant infections and the overall number of diarrheal deaths in the U.S.

Yet C. Diff Remains a Major Health Concern

Despite that progress, death rates from infections caused by this particular bacterium are still reaching dangerous and expensive levels. The number of deaths from C. diff infections rose from 3,000 to 14,000 over a period of seven years. As repeated antibiotic use has left us with the hardiest specimens of an already hardy bacterium, the need for personal responsibility in managing C. diff is greater than ever.

We can follow the healthcare system’s example by restricting unnecessary (or all, if possible) antibiotics and applying best hygiene practices, but these new hospital cleanliness procedures are only one piece of the puzzle in dealing effectively with C. diff and other bacterial infections (spoiler alert: more produce helps!). They are also a piece of the puzzle that the average person will find difficult to replicate. But there are other ways to reduce the chance of an infection developing from rampant C. diff.

The Strong Survive

It’s simplistic to reduce the fascinating and intricate workings of the gut microbiome to good guys and bad guys, but the framing is useful for focusing on what matters most: balance. In nursing homes, as many as half of the residents may have C. diff colonized in their gut. Since not all of those carrying C. diff (the bad guy) experience infection, something is halting the microbe’s progress.

Enter the good guys – your beneficial microbes. Many people, even people living in the same facilities, house the C. diff bacteria with no infection. A resilient, opportunistic bacterium like C. diff is looking for a host it can take advantage of, and a body dealing with a toxic overload and depleted beneficial bacteria is an easy target. Cultivating those microbes by consuming fresh, raw, organic produce and eliminating processed, artificially produced food is the best and most necessary way to build your body’s natural defenses.




7 Unhealthy Synthetic Dyes and Food Colorings to Avoid and Why

Food dye in some form or another has been in use since the time of the ancient Egyptians to make food look more appetizing. The first synthetic food color, obtained from bituminous coal, was introduced in 1856. Today’s food colorings may be more sophisticated, but big food companies like Kraft, General Mills, Campbell’s Soup, Taco Bell, and Chipotle are among the businesses announcing that they will be removing synthetic food dyes from many if not all of their offerings. It’s a move in keeping with the increasing demand for less processed, healthier food options while eating out or at the grocery store.

We’ve been eating them forever, and we’re fine…right? Not so much.

Dyes and colors are controversial; they have been linked to cancer, allergic reactions, and other health issues. Eating something for a long period of time does not automatically make it healthy or safe. Consumers are turning away from processed foods for health reasons, and labels are the best way to see what’s in your food. Here’s what to look out for and avoid.

Red 3

Red 3, also known as erythrosine, is one of the most commonly used food colorings. Its signature cherry-pink is found in maraschino cherries, various candies, baked goods, and sausage casings. Derived from coal tar and based on a fluorone dye, Red 3 has been linked to hyperactivity in children, thyroid tumors, and breast cancer, and it can damage liver DNA. Since its introduction as one of the 7 approved synthetic colors listed in the Pure Food and Drug Act of 1906, there have been numerous attempts to ban Red 3 from food due to its health risks. Although erythrosine has been banned in cosmetics and topical drugs in the United States since 1990, industry pressure has succeeded in keeping it as an option for coloring food.


Red 40

Touted as an alternative to Red 3, Red 40 is also known as Allura Red or Food Red 17. It is a dark red powder made from petroleum and can contain aluminum and other heavy metals. The most commonly used synthetic food coloring in the United States, it can be found in fruit cocktail, candy, salad dressing, chocolate cake, cereal, beverages, pastries, maraschino cherries, fruit snacks, and many over-the-counter pharmaceuticals. Products containing the dye are treated differently in Europe, where a required label warns that Allura Red “may have an adverse effect on activity and attention in children.” Drastic behavioral changes in children are one of the biggest health concerns associated with Red 40. Other reported side effects include migraines, jitteriness, inability to concentrate, and upset stomach.

Yellow 5

One of the most controversial synthetic food dyes, Yellow 5, or tartrazine, is the low-cost, coal-tar-derived stand-in for beta-carotene. It has been linked to multiple health conditions, including hyperactivity in children, severe allergic reactions and rashes, nausea, headaches, and asthma. These links have led to Yellow 5 being banned in Norway and Austria, while the U.K. government asked companies to voluntarily remove it from their products. That has not stopped the dye from being added to a wide range of consumables in the U.S., like cereals, puddings, frozen desserts, bread and cake mixes, condiments, beverages, chips, snacks, medications, and pet foods.

Yellow 6

Though it is primarily labeled as Yellow 6 in the U.S., this dye actually provides an orange color. Some of its other names include Sunset Yellow, Monoazo, and Orange Yellow S. This dye is banned in Norway, Finland, and Sweden and must be labeled in the E.U. It’s been linked to adrenal and kidney cancer, diarrhea, vomiting, swelling of the skin, migraines, and worsening of asthma symptoms. The signature yellow-orange of the dye has found its way into foods like boxed macaroni and cheese, chips, bakery goods, cereals, beverages, dessert powders, candies, gelatin desserts, sausage, and some pharmaceutical drugs. For those who normally avoid foods that come in boxes and bags, Yellow 6 can also be found in preserved fruits, so check labels carefully.

Blue 1

Blue 1, or Brilliant Blue, is the more commonly used of the two blue food dyes approved for use in the U.S. and frequently partners with tartrazine (Yellow 5) for artificially colored green items. Like many of the other synthetic dyes, Blue 1 was originally derived from coal tar, although now it’s oil-based. Brilliant blue foodstuffs like candies, ice cream, and liquors are easily spotted, although canned peas, soup packets, and mouthwashes also contain the color. Blue 1 is not as controversial as some of the other synthetic food dyes, but it has been linked to kidney tumors in mice and to hypersensitivity reactions.

Blue 2

Blue 2 is also known as Indigotine, Indigotin, or Indigo Carmine. Most of those names reference Blue 2’s origins as a synthetic version of actual, plant-based textile dye (and color of the rainbow), indigo. The synthetic form of indigo is derived from coal tar or petroleum. In addition to coloring blue jeans, the twenty thousand tons of Blue 2 produced every year can be found in colored beverages, candies, pet food, and pharmaceuticals. It’s linked to brain tumors in male rats, asthma, skin rashes, and mild to severe allergic reactions. Blue 2 is also used to highlight issues in the urinary tract, coloring urine blue and making leaks apparent; this practice has caused dangerous blood pressure increases in some people. Indigo Carmine has been banned as a food dye in Norway, Belgium, Australia, Sweden, Switzerland, France, Germany, and Great Britain.

Caramel Coloring

Caramel coloring is not a synthetic food dye in the strictest sense, but seeing it listed as an ingredient should still give you pause. Most of the caramel coloring found in select sodas, baked goods, chocolate items, candies, and protein bars is made by treating sugar with ammonia. Needless to say, this can have a carcinogenic effect on those who consume it. Caramel coloring is linked to cancer in animals, and the state of California requires cancer warning labels on products delivering more than 30 micrograms of caramel coloring in a day. In addition, caramel coloring can be sourced from lactose, barley, or wheat. North American and European caramel colorings are typically derived from wheat or corn and are highly processed, yet that coloring is still considered “gluten-free,” so check labels carefully.

Alternatives Abound

We have become accustomed to food designed to delight the senses, and many companies provide that by the cheapest means possible. The recent push to eliminate artificial colors has shown that most food colors can be achieved through other means, like turmeric, beets, blueberry juice, or spirulina. As more people understand that what we eat determines our health, synthetic dyes will phase out completely. Until then, labels are your friend.




Activated Charcoal is Very Popular Right Now – Here’s Why

Activated charcoal is expanding into a whole new market. It’s regularly used in water filtration, for poison control, and in herbal medicine, but it’s increasingly showing up in health and beauty care products. So what is activated charcoal, and why are people suddenly rubbing it all over their teeth?

How is Activated Charcoal Different than Actual Charcoal?

Let’s get the biggest issue out of the way. Activated charcoal (also known as activated carbon) is almost but not quite the same thing as actual charcoal (the stuff used for summer grilling). The bulk material used to produce activated charcoal is some combination of bone char, coconut shells, peat, petroleum coke, coal, olive pits, and/or sawdust, while actual charcoal can include all of that plus agricultural waste and other dry biomass.

To create activated charcoal, carbon is heated to temperatures of 1,700 to 1,800 degrees Fahrenheit with steam or air in an oxygen-free environment, although wood-based activated charcoals frequently endure a chemical process that includes heat and phosphoric acid as well. The high heat “activates” the charcoal, removing the carbon’s volatile compounds and enlarging its internal pores. These enlarged internal pores allow the now activated charcoal to chemically attract and bind contaminants like chlorine, PCBs, industrial solvents, and pesticides, among others.


How It Works

When it’s activated, charcoal acquires a positive charge. This is how the activated charcoal captures nasty stuff in our bodies and the water supply. Chlorine, for example, is being replaced in the water disinfection process by chloramine, a chemical that can form trihalomethanes, which in turn can cause cancer. Since that chemical is negatively charged, positively charged activated charcoal is a cost-effective and safe way to improve the quality of the water we use every day. But people also ingest it.

Here’s What to Take Activated Charcoal For

Activated charcoal is recommended for accidental poisonings and certain drug overdoses. It’s also used to pull out harmful mercury and aluminum preservatives found in dental amalgams or vaccines, as well as arsenic from rice and other foods. It’s been associated with lower cholesterol, anti-aging properties, and better kidney and liver function, and it helps relieve gas and bloating by attracting disruptive digestive byproducts.

In personal care products, activated charcoal attracts the dirt and oils that can irritate skin and increase the likelihood of acne, eczema, dry skin, and other topical infections. Bad breath or body odor often occurs when toxins are exiting the body, and activated charcoal can help remove those toxins quickly.


The Forms of Activated Charcoal

Adding activated charcoal to your diet is not a pleasant undertaking. You’re eating ashes, and it tastes about as appetizing as you’d imagine. Because of this, most activated charcoal supplements come in pill form. But activated charcoal is also invading other areas. It can now be found in:

  • Shampoo
  • Facial sponges and towelettes
  • Lattes
  • Toothpaste, tooth powders, and toothbrushes
  • Soap
  • Ice cream
  • Skin cleansers

Hoax or Harmless?

Activated charcoal is considered harmless, but there are a few things to consider before trying it. Don’t take activated charcoal with prescription medications, and avoid it if you’re constipated, as the charcoal will exacerbate the issue. It can also cause black stools. The type of raw material used to make your activated charcoal matters too, with charcoal derived from coconut shells generally considered the highest quality.

The Bottom Line

If we continue to treat the planet as we have been, the amounts of heavy metals, pesticides, and chemicals in our bodies will continue to rise through environmental exposure. Activated charcoal makes the case that we have a solution to at least one aspect of that. If exposure cannot be prevented, then regularly cleansing the body of heavy metals can prevent the kind of buildup that leads to scarier and more serious health concerns.




NSAIDs Study Shows Side Effects are Worse Than Original Ailments

A systematic review of studies involving nearly half a million people concluded that people who used nonsteroidal anti-inflammatory drugs (NSAIDs) are at increased risk of heart attack, and people who used higher doses of NSAIDs were at greater risk. The duration of use didn’t matter: researchers saw an increased likelihood of myocardial infarction after a single day. While we already knew about the negative effects of NSAIDs on the cardiovascular system, the size of this study makes it even clearer how carefully we need to consider the medications we choose.

According to SpineHealth.com, the four most common NSAIDs are:

  • Aspirin: Bayer, Bufferin, Ecotrin, and St. Joseph
  • Ibuprofen: Advil, Motrin
  • Naproxen: Aleve, Anaprox DS, Naprosyn
  • Celecoxib: Celebrex

The Report Card Isn’t Promising

All of the NSAIDs examined correlated with an increased chance of heart attack, with the increase ranging from 20 to 50 percent. The risk decreases over time after the last use of the drug.

According to the lead author of the study, Michèle Bally, the absolute increase in risk is quite small, which makes sense, as the risk of heart attack for most people is small. But the cardiovascular system isn’t the only bodily system that NSAIDs don’t agree with.

High doses or prolonged use of NSAIDs can result in chronic kidney diseases like chronic interstitial nephritis. While NSAIDs are not as likely as acetaminophen (Tylenol) to cause liver damage, they have been associated with ulcers and gastrointestinal bleeding in large doses. If you’re keeping track, that’s potential damage to three of the most important systems in the body: cardiovascular, urinary, and gastrointestinal.

Saving for a Rainy Day

People seem comfortable using NSAIDs for everyday complaints like joint pain, headaches, swelling, and fevers. So is it worth it? Not with more sustainable and healthier options available.




Safe Seafood and What to Avoid in 2017

Roughly half of all of our seafood comes from farmed sources. That isn’t inherently a bad thing; farmed fish seems like a logical, responsible consumer choice. The problem is that modern agriculture’s knack for slowly stripping nutrients from our food, making it toxic, and causing irreversible environmental damage is not exclusive to land.

Wild-caught seafood is also problematic. Hopefully, we all know how bad conditions are in our oceans. In addition to overfishing certain species to the point of potential food chain collapse, wild-caught seafood frequently comes with mercury or PCBs (an industrial chemical). For those who like to eat fish or appreciate an omega-3, DHA brain boost, does our current seafood model offer anything left to enjoy or worth preserving?

Here’s Where You Are

Much like land-based agriculture, the best way to ensure the quality of your seafood is to do it yourself or get it from a trusted individual or company (although the latter is less likely). Unlike agriculture, opportunities to exercise personal seafood quality control are few and far between. Many of us don’t live near water, and those who do should be cautious about eating local fish due to PCBs, mercury, DDT, and other chemical runoff. You could farm your own tilapia, but this isn’t feasible for most people. With those options eliminated, the question is: farmed or wild caught?

The Elephant Under the Sea

Much time has been spent discussing the differences between farmed and wild-caught fish, and everyone agrees that farm-raised fish is fattier than wild caught. From a health standpoint, fatty acids are the best reason to eat fish. But this doesn’t mean that farm-raised fish are better for you: the fatty acids in tilapia, for example, are primarily omega-6s, and an excess of those is more likely to increase cardiovascular risk than boost brainpower. Farmed fish are also fed a completely unnatural diet, from grain to chicken meal to other fish meal to other animal waste products. The result is often fattier, less nutritious seafood with more chemical residues than wild-caught fish (though wild-caught fish can have high levels of mercury). Neither option is a slam dunk, but farmed fish are more likely to cause long-term health issues.

Seafood Safety List

There’s really no way to guarantee that your seafood dinner will be healthy and sustainable, though most seafood falls into one of three categories: safe and sustainable, unsafe, or unsustainable.

Safe and sustainable seafood is the best possible type of seafood to consume, as it is less likely to have high levels of mercury and PCBs and is harvested in a way that doesn’t damage the ecosystem. Location matters quite a bit when looking for sustainable fish populations, but some of the more common examples of this category include:

  • oysters (farmed or not)
  • Pacific sardines (wild caught)
  • Atlantic mackerel (wild caught)
  • clams (farmed or not)
  • Alaskan salmon

Unsafe fish are likely to have high amounts of mercury and PCBs and should be avoided. These fish are generally bigger, as their longer lifespans allow for a greater buildup of contaminants. Their sustainability varies, but some of these fish include:

  • shrimp
  • swordfish
  • tilapia (farmed)
  • Atlantic cod
  • shark
  • big-eye tuna
  • ahi tuna
  • Atlantic salmon (farmed)

Unsustainable fish are overfished, in danger of disappearing, or harvested in ways that cause environmental devastation. It’s debatable whether any seafood is truly sustainable at this point. Regardless, some of the worst offenders when it comes to sustainability are:

  • Chilean sea bass
  • all tuna
  • orange roughy
  • red snapper
  • Greenland halibut
  • swordfish
  • Atlantic sea scallops

The most popular fish at your average fish counter are usually shrimp, tuna, salmon, and tilapia. None of those is an ideal choice. The ideal choice is likely something smaller, wild caught, and from Pacific fisheries, but that can be difficult to find at the local fish counter. Finding sustainable and healthy seafood is already a difficult and time-consuming prospect. Is it likely to get better or worse?

What Sustainability Looks Like

Here’s the biggest seafood issue today: sustainability. Sustainability means choosing seafood that is brought to market with consideration for the long-term health of that particular species and the overall health of the ocean. Several organizations, like Seafood Watch and the Marine Conservation Society, are dedicated to determining which seafood has the least impact on ocean health. But right now that doesn’t make much of a difference. For our fish consumption to be at a level the ocean can sustain, at least one out of every two people would need to stop eating seafood completely. Until that happens, there really isn’t a guilt-free way to enjoy seafood.




The Acne Vaccine Is Coming

Scientists don’t know the cause of acne. They know the bacterium Propionibacterium acnes (P. acnes) is partly to blame, but beyond that, they aren’t sure what causes the skin condition that affects nearly 50 million Americans every year. Experts can’t be sure of P. acnes’ role in acne, as the bacterium is also part of beneficial processes in the body, like secreting digestive enzymes. That doesn’t matter, though. What matters is that 50 million people are diagnosed with acne every year, and someone needs to fix it, quick. Enter a new vaccine.

Which One of These is Not Like the Other

At the University of California San Diego, researchers are developing a vaccine for acne. Lab tests on skin biopsies collected from test subjects with acne have been positive, and the researchers are eager to begin the next stage of trials. But if they don’t know the definitive cause of acne, what are they vaccinating against? According to the lead researcher of the project, Dr. Eric Huang, “Acne is caused, in part, by P. acnes bacteria that are with you your whole life — and we couldn’t create a vaccine for the bacteria because, in some ways, P. acnes are good for you…But we found an antibody to a toxic protein that P. acnes bacteria secrete on skin — the protein is associated with the inflammation that leads to acne.”

What is Being Treated?

So this vaccine targets a protein that causes the inflammation that leads to acne. Here’s where the idea goes off the rails a bit. Vaccines are specifically designed to elicit an immune response, and inflammation is part of that response. Basically, scientists are hoping to treat inflammation with a different kind of inflammation. That kind of treatment makes sense if you’re thinking like a five-year-old who cleans their room by shoving everything into the closet or under the bed.

What if a pimple were treated as a sign to take care of your body? When the body (the kidneys in particular) is not processing waste quickly enough, excess toxins push out through the skin. When that waste gets clogged in pores and infected, a pimple forms. That’s a lot of steps before the pimple appears, and each step is an opportunity to put safeguards in place: to fortify and strengthen your system with fresh, raw, organic produce while eliminating boxes, bags, and sugars. Diet is the foundation of health, but no one is perfect. Pimples can be a sign to look at what you’re eating and dial things back to healthy.

Forcing Nature

You have a pimple and you want it gone. The vaccine may accomplish that, but at what cost? You have a pimple for a reason, and scientists don’t know the cause of acne. If this vaccine is brought to market, it will likely be labeled not harmful to humans, but there won’t be any long-term studies, and the side effects will likely be worse than what the vaccine is meant to fix. Those promoting it will never look beyond conventional medicine to actually address the cause of the infection. After all, it worked so well with antibiotics.




Antibiotics and Rectal Cancer – There’s a Connection You Should Know About

Colorectal cancer rates are rising in a particularly disturbing way. A new study from the American Cancer Society (ACS) analyzing cancer occurrences found that diagnoses of colorectal cancer have increased with every generation born since 1950. Scientists are unsure of the cause. But luckily for them, another new study has found that prolonged use of antibiotics is linked to an increased likelihood of bowel polyps, a precursor to rectal and colon cancers. Together, the two studies make a compelling argument about the long-term consequences of our current antibiotic use and food system.

They Get Younger Every Year

Cancer is now a fact of life. The likelihood that you or someone you know well has been diagnosed with cancer is already high, and the number of cases diagnosed annually is predicted to rise to more than 21 million by 2030. Colon and rectal cancers are some of the most common cancers, and 90% of cases occur in people over the age of 50. According to Rebecca Siegel, the lead author of the new ACS study, “People born in 1990…have double the risk of colon cancer and quadruple the risk of rectal cancer” compared to the risk someone born in 1950 faced at a comparable age.

The American Cancer Society expects to see 13,500 new cases of colon and rectal cancer in people under 50 in 2017. At this point, someone under 50 is more likely to be diagnosed with colon or rectal cancer than anyone of any age is to be diagnosed with a less common cancer like Hodgkin’s lymphoma. The conventional Western diet induces and intensifies inflammation in the colonic mucosa within two weeks, and that inflammation is one of the precursors of cancer. While the ACS study does address diet briefly, it also gives the impression that cancer is something that is just going to happen. This is where that other study comes in.

Polyp, Polyp, Polyp

For every disease or condition that develops in the body, there are warning signs along the way. For rectal cancer, one of those signs is small growths on the lining of the bowel known as polyps. Not all bowel polyps are cancerous, but they can develop into cancer later if the issue isn’t addressed. But how do the polyps get there? A study looking at data from a long-term nurses’ health study found that nurses who had taken antibiotics for at least two months between the ages of 20 and 39 were more likely later in life to be diagnosed with adenomas (a specific type of bowel polyp) than nurses who had not taken antibiotics for a sustained period.

Polyps are not the same thing as rectal or colon cancer, and if you have polyps, you will likely never notice their presence. But imagine you took antibiotics for a prolonged period from the ages of one to twenty, and then again from twenty to thirty. You also have to contend with the possibility that you are ingesting a steady stream of antibiotics if you regularly consume conventional meat. Each extended contact with antibiotics increases the likelihood of developing adenomas.

The Steps Are There

These two studies make a compelling argument for managing cancer risk through lifestyle. Choose to limit antibiotics both in the food you eat and in the medicine you take, whether through vegetarianism, veganism, natural remedies, or informed consumption. Replace typical health pitfalls like a sedentary lifestyle and a conventional Western diet with regular movement and plenty of fresh, raw, organic produce. With each positive choice, the likelihood of rectal cancer (or any cancer at all) decreases. No cancer treatment available can replicate the benefits of taking care of yourself first.
