Currency Adjustments and “Contagion”

Today I want to talk about debt. Not the kind you think about when you see headlines about student loans, credit cards, or the federal budget. No, this is a very special kind of debt, a kind you likely will hear nothing about, but a kind of debt that has a tremendous amount of influence around the world.

Let’s start in broad strokes. If you’ve read my previous blogs, or any article about floating currencies (fiat money, for the gold bugs out there), then you will know my thoughts on the state of the U.S. Dollar. The USD is the most stable currency in the world today, and it earns that ranking from both its Liquidity (how frequently it is used) and its Notoriety (the number of people and institutions that recognize its stability, and thus use it). Most global energy transactions are made in USD, most financial clearinghouse activity is done in USD, and almost every country on Earth recognizes it as, for better or worse, a stable currency.

This blog post is not about America, however. It is about the rest of the world. How do other countries’ currencies compete with the Dollar in a floating currency system? Do other countries attempt the same industrial booms that America had? What about the same debt issuances, fiscal policy, or foreign direct investment?

The answer, as with most things financial, eventually comes back to the U.S. Federal Reserve. The Fed sets monetary policy for the U.S., which makes the USD either stronger or weaker relative to other currencies. When the USD gains strength, more people want to hold it, and when it weakens, people want to sell. Recall that a stronger USD means imports to the U.S. cost less, because each dollar buys more of a foreign currency. However, a stronger USD also makes U.S. exports more expensive, because foreign buyers must give up more of their own currency for every dollar of American goods, and so they buy less. It’s important, though, not to think of currency strength on an absolute numeric scale. One Dollar will only ever be worth one Dollar. Its strength or weakness is a matter of how much in goods, services, or other currency it can purchase, and by that metric currencies strengthen or weaken as needed in response to a growing global marketplace and changing resources.
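To put rough numbers on the import/export effect, here is a quick sketch. The exchange rates and prices are made up purely for illustration, not real data.

```python
# Illustrative only: hypothetical exchange rates and prices.
def usd_cost_of_import(price_in_eur, eur_per_usd):
    """Dollars an American pays for a good priced in euros."""
    return price_in_eur / eur_per_usd

def eur_cost_of_export(price_in_usd, eur_per_usd):
    """Euros a European pays for a good priced in dollars."""
    return price_in_usd * eur_per_usd

for eur_per_usd in (0.85, 0.95):   # weaker dollar vs. stronger dollar
    imported = usd_cost_of_import(100, eur_per_usd)   # a 100 EUR good
    exported = eur_cost_of_export(100, eur_per_usd)   # a 100 USD good
    print(f"1 USD = {eur_per_usd} EUR -> EUR100 import costs ${imported:.2f}, "
          f"$100 export costs EUR{exported:.2f}")
```

At the stronger rate, the import gets cheaper for the American buyer while the American export gets pricier for the foreign buyer, which is the whole point.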


Monetary Policy Overview
For a more in-depth look at how all this works: the Federal Reserve sets U.S. interest rates through the Federal Funds Rate (FFR). The FFR is the overnight interest rate on loans from bank to bank, quoted as an annualized percentage. If JPMorgan Chase, for example, needs to borrow money in anticipation of a large cash withdrawal in a few days, it will want to borrow from the Wells Fargo down the street, but Wells Fargo will not lend that money unless it receives interest in return. That is the FFR in a nutshell, and the Fed raises and lowers it in response to economic strength and weakness, as read through consumer trends.
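For a rough sense of what an overnight interbank loan actually costs at an annualized rate, here is a small sketch; the $50 million loan size and the 2% rate are hypothetical.

```python
# Hypothetical numbers: a $50 million overnight loan at a 2.00% annualized rate.
loan_amount = 50_000_000          # dollars borrowed overnight
annual_rate = 0.02                # annualized federal funds rate (assumed)

# Money-market convention divides the annual rate over a 360-day year.
overnight_interest = loan_amount * annual_rate / 360
print(f"Interest owed for one night: ${overnight_interest:,.2f}")   # ~$2,777.78
```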

In a strong economy, theoretically, consumers will have more disposable income and will be spending a healthy amount. The Fed wants to protect against the economy getting too hot (and runaway inflation caused by overspending), so they will raise the FFR, thus pushing up the borrowing costs of banks and businesses, and also pushing up the effective interest rates paid on bonds. Consumers will then, theoretically, want to save more of their money by buying those bonds, and economic activity slows.

Conversely, in a weak economy, consumer spending will be weak, and saving will theoretically be strong. The Fed will lower the FFR in this situation, in order to entice people to save less money, and spend more. Lowering the FFR will lower the borrowing and lending costs to banks and businesses, letting them respond more effectively to consumer trends and letting the economy pick up steam.

The Fed has indicated in recent years that its “normal” desired FFR is around 1% higher than the inflation rate, which, as of August 2018, has come in at 2.2%. That puts the implied target somewhere near 3.2%.


The Central Issue
The world is an ever more connected place these days, and that includes the financial world. Recall that investors, businesses, and governments from all over the world acknowledge the stability of the USD. Other countries (those whose local currency is not the USD) can, for purposes of this blog, be lumped into three groups: Developed Markets, which have strong currencies of their own and compete with the USD; Emerging Markets, which have less established currencies, often pegged to the USD; and Tertiary Markets, which have the weakest economies and most volatile currencies. For this blog post, we will focus on Emerging Markets.

Emerging Markets have seen an explosion of economic growth over the past 20+ years, due to a multitude of factors. These include, but are not limited to, easier access for developed-market investors to invest in them directly, increased media coverage and the grouping of certain countries into blocs (the African Union, BRICS, ASEAN, etc.), and the overall liberalization of eastern markets. But no factor has had as much impact on, or been as lasting for, the long-term economic success of developing nations as cheap debt. Loose monetary policy and cheap debt have been a tailwind to global growth for decades, and have allowed otherwise struggling countries, rife with political and economic turmoil, to reap the benefits of virtually unlimited investment, near-free money, and confidence that the good times would keep on rolling.

I often talk about secular trends in western civilization being responsible for lower interest rates: an aging population and falling birth rate, stagnant or decreasing real wages, and asset prices (stocks, homes, etc.) rising faster than GDP or wage growth. Think about this in tangible terms. You work as long as you need to, but your income doesn’t seem to rise much year to year. You look around, and everything is financed nowadays. Average home values have climbed in the last 30 years from under $200,000 to over $300,000, college tuition has more than tripled, and even your phone costs more than $1,000. Rising costs for such common expenses push them to be paid in installments, and those installments have interest rates attached. As debt levels rise among citizens of the West, interest rates necessarily have to decrease to preserve affordability. Since western economies make up such a large portion of the global economy, this dictates a lower overall world interest rate and makes debt cheaper.

Emerging markets will want to take full advantage of this situation, because to them, it could not be better. Countries that experience massive growth rates do so through a mass influx of construction and industrial activity (the American way!), and that takes a lot of capital. Some of the most popular emerging markets are Turkey, Argentina, South Korea, and Indonesia, and I’m sure you’ve heard at least a little about the growth they have experienced in the past few decades. How do you think they achieved it? Massive capital inflows that are often, to their long-term detriment, denominated in USD.

Remember when I said the USD was the most reliable, stable, liquid currency in circulation? It holds true even in those emerging markets. Debt can be issued, bought, sold, and paid most easily when it is denominated in USD. It’s how investors in New York can lend money to Turkish construction companies or South Korean software companies, or even buy notes issued by Chinese banks. It’s how financial institutions can transact with each other without foreign currency adjustments getting in the way of quarterly earnings. But at its core, it’s most definitely how countries with no vibrant economy or stable currency of their own can “borrow” one, to prop themselves up and propel massive growth.
                                                                         


Contagion
Here’s how this works for governments. Say Turkey wants to undergo a massive infrastructure plan involving a lot of companies building new buildings. The project will cost, say, 5 billion Turkish Lira. Turkey does not have the money to cover this cost, and it has a choice to make. It could issue Lira-denominated debt to cover the cost, but that comes with a 24% interest rate paid every year. It could alternatively use its Lira to buy Dollars and issue Dollar-denominated debt at a 5% interest rate. U.S. monetary policy is loose, and interest rates are still at historic lows, remember? Naturally, Turkey would choose the latter option. The effect is twofold. Dollars exit the U.S. and enter Turkey as a foreign exchange reserve, which buoys the value of the Lira (because, theoretically, Turkey could sell those Dollars to support the Lira), and it also allows Turkey to take advantage of the far lower U.S. interest rate. As long as the debt can be rolled over at each maturity, demand for that debt stays high, and Turkey experiences a lot of growth as a result.
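Here is a back-of-the-envelope comparison of the two financing choices, using the 24% and 5% rates from the example and an assumed exchange rate of 6 Lira per Dollar; the exchange rate is purely for illustration.

```python
# Illustrative comparison of financing a 5 billion Lira project.
project_cost_lira = 5_000_000_000
lira_rate = 0.24                  # Lira-denominated borrowing rate (from the example)
usd_rate = 0.05                   # Dollar-denominated borrowing rate (from the example)
lira_per_usd = 6.0                # assumed exchange rate, for illustration only

# Option 1: borrow in Lira.
annual_interest_lira_debt = project_cost_lira * lira_rate

# Option 2: borrow the Dollar equivalent instead.
project_cost_usd = project_cost_lira / lira_per_usd
annual_interest_usd_debt = project_cost_usd * usd_rate            # owed in Dollars
annual_interest_usd_debt_in_lira = annual_interest_usd_debt * lira_per_usd

print(f"Lira debt interest:   {annual_interest_lira_debt:,.0f} Lira/year")
print(f"Dollar debt interest: {annual_interest_usd_debt_in_lira:,.0f} Lira/year "
      f"(while the exchange rate holds)")
```

Under these assumed numbers, the Dollar route costs roughly a fifth as much per year, which is exactly the temptation the post describes.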

It works in a similar way for businesses. Say Turkey wants to infuse its private sector with a ton of cash in order to juice its economy. Countries do this all the time, in the form of tax cuts, tax rebates, sovereign wealth funds, or even helicopter money. Turkey wants to lend its businesses 50 billion Lira. Turkey only has 5 billion Lira, and it has a choice to make. It could lend the 5 billion Lira now and accept a lesser result, it could invest the Lira in the hopes of having a larger amount in the future (but lend nothing now), or it could buy Dollars with that Lira, issue Dollar-denominated debt to lever itself up and receive many more Dollars, and then sell those Dollars to buy the 50 billion Lira it wants to lend. Businesses within Turkey’s domestic economy can only transact in Lira, so they cannot use the Dollars directly. The effect of this is once again beneficial to Turkey: 5 billion Lira used to purchase Dollars becomes leverage that lets Turkey “borrow” access to much more of its own currency, at a much lower cost. Once again, Turkey is taking advantage of lower interest rates and cheaper U.S. debt to grow its own economy.
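A minimal sketch of that leverage math, again with an assumed 6-Lira-per-Dollar rate and the example’s 5% Dollar borrowing rate; how much Turkey could actually borrow against its reserves is not something the post specifies, so the sketch simply borrows the shortfall.

```python
# Illustrative only: assumed exchange rate and borrowing capacity.
lira_per_usd = 6.0
usd_rate = 0.05

own_lira = 5_000_000_000          # what Turkey actually has
target_lira = 50_000_000_000      # what it wants to lend domestically

own_usd = own_lira / lira_per_usd            # ~0.83 billion USD after conversion
target_usd = target_lira / lira_per_usd      # ~8.33 billion USD needed
borrowed_usd = target_usd - own_usd          # Dollar-denominated debt issued (~$7.5B)

annual_usd_interest = borrowed_usd * usd_rate  # owed in Dollars every year
print(f"Dollars borrowed:    ${borrowed_usd:,.0f}")
print(f"Annual interest due: ${annual_usd_interest:,.0f} "
      f"(must be repaid in Dollars, not Lira)")
```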

Sounds like a win for Turkey, right?  Seems like a true “miracle” situation, one with no downside, right?

Surely you can see a glaring flaw in all this. The system only works when dollar-denominated debt has a lower interest rate, or, put more simply, when the USD is cheaper. For the last 30 or so years this has been the case, with the massive interest rates of the 1980’s giving way to the Greenspan-led mass lowerings of the 1990’s, the teaser rates of the 2000’s, and culminating in the all-time-low interest rates of the 2010’s at 0% – 0.25%. As you can imagine, zero is the “effective lower bound” here, as money cannot get any more free than “free”. But what happens when U.S. interest rates finally start to rise?


Recall that the Fed raises interest rates to combat inflation and strengthen the Dollar. When the Dollar strengthens, it can buy more foreign currency, and foreign currency can buy less of it. This is where we begin to see the concept of Contagion. In our Turkey examples, for both government and business, growth in Lira terms is being fueled by Dollars, and in both instances Turkey needs those Dollars to be as cheap as possible so it can keep buying them and issuing debt. If its liquidity dries up and it can no longer buy as many Dollars with its Lira, then investors quickly lose confidence in the ability of the Turkish government to satisfy its debt obligations, and those investors will not buy Turkish debt. Turkish debt then has to offer higher yields to stay attractive, which pushes Turkey’s interest rates higher. If they climb high enough, Turkey will not be able to pay its debts, it will lose the ability to attract foreign capital, and its economic growth will flatline.
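To see why a strengthening Dollar bites, take the hypothetical Dollar interest bill from the sketch above and service it in Lira before and after the Lira weakens from 6 to 8 per Dollar; both exchange rates are made up.

```python
# Hypothetical: servicing $375 million of annual Dollar interest as the Lira weakens.
annual_usd_interest = 375_000_000

for lira_per_usd in (6.0, 8.0):
    cost_in_lira = annual_usd_interest * lira_per_usd
    print(f"At {lira_per_usd} Lira/USD, the same Dollar interest "
          f"costs {cost_in_lira:,.0f} Lira")

# The obligation in Dollars never shrinks, but the Lira needed to buy
# those Dollars grows as the local currency weakens.
```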

You’re probably thinking to yourself, surely this isn’t as big a problem as you’re making it out to be? A percent or two of interest won’t make or break an entire country’s economic performance, right? I say to that: look to history as the judge, specifically the events of 2007-2009. The Fed’s Quantitative Easing to combat the Recession had the double impact of shoring up debt markets (by buying distressed mortgages and bonds, removing many of the bad apples from the tree) and flooding the U.S. financial system with trillions of new, cheap, accessible Dollars, in the hopes that banks would lend them out and get the economy moving again. The net effect was the all-time-low interest rate mentioned above, and emerging markets took full advantage of it. Liquidity was so high, and they could afford to borrow so much, that the situation took on the look of a debt trap: borrowing costs are so low, and so much is borrowed essentially for free, that when rates finally go up, borrowers have no way of paying it back.

This is what I believe has happened, and what is finally beginning to show, in emerging market economies. U.S. interest rates are about to pass the 2% threshold, and the Fed continues to signal that the rate hike cycle is nowhere near over. As the Dollar continues to strengthen against more volatile, weaker emerging-market counterparts, borrowers of Dollars will not have a way to repay their debts. This will lead to massive capital flight from those economies, economic contraction, and a continued divergence in favor of the West, and the U.S. in particular, as the destination of choice for the world’s money. So the next time you turn on the financial news and see the word Contagion, remember what it means, and keep your money close.


To Insure Promptitude, Not Wages

My name is Ian, and I hate tipping at restaurants.

Did that get your attention? Ready to call me rude? Callous? Unwilling to accept the struggle that service industry workers go through every shift? Before you jump to any conclusions, just hear me out.

The service industry in the United States, in my opinion, has become a charade, a parody of work. Hundreds of thousands of people willingly go to work every day knowing that their employer pays them far less than the federal minimum wage, and accept this as the norm, with the hope that generous customers will tip them enough to make up the difference. It’s seen in many different lights. Either this version of work is societally acceptable, with the expectation that the customer tip their server 15%-20% of the check treated as an act of good faith, or it’s wage theft, where the ingrained responsibility of the consumer to tip their server has replaced the employer’s obligation to pay a federally mandated wage.

This post will explore the history of tipping in the US, and what would happen if service employees in restaurants were paid the same standard of wage as anyone else.


Why do we tip?

We all know to tip, but why do we actually do it? Apart from the societal need for our servers to be able to live… is there something from history? As it turns out, the reason we tip comes directly from our past.

Tipping wasn’t popular in the States until the late 1800’s. Before that, it was a way for Western Europeans in various aristocracies to show off to their peers. A T.I.P. (said to literally mean To Insure Promptitude) was left before a meal or drink, so servers would be faster in bringing items out to customers. Wealthy Americans didn’t start using this payment method until just after the Civil War, in the late 1860’s, when they began making trips to Europe and seeing it for themselves.

Shortly after the end of the Civil War, when the South was going through Reconstruction, there was a quandary regarding how recently freed slaves should be treated in society. Legally they were freed, and therefore deserved to be paid for their work, but white business owners did not want to pay them. Enter the tip! By compensating low-skilled work (most often in the service industry) through customer gratuity rather than through the fixed shackles of a wage (hope you get the sarcasm), workers supposedly had the potential to increase their earning power many times over simply by providing great service. This concept wove its way through history, but likely not even its creators could have realized what a national norm it would become.

Eventually, amended into the Fair Labor Standards Act in the 1990’s was the ultimate warping of this concept: the two-tiered minimum wage system. Whereas the regular federal minimum wage currently sits at $7.25 an hour, the FLSA establishes provisions for a wage far below that level. Workers who receive more than $30 a month in tips qualify for a reduced minimum wage of just $2.13 an hour, the idea being that they can make up the rest of their income through their primary business activity, literally Insuring Promptitude. But turning what was once simply a mindset into federal law had a two-pronged effect. It provided service employees with the opportunity to boost their earnings, sure, but it also allowed the people of America to steadily forget why they tip, and simply feel that they must tip.
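The mechanics of the two-tiered system are simple arithmetic: the employer pays $2.13 an hour and takes a “tip credit” for the gap up to $7.25, so tips have to average at least $5.12 an hour just to get a server to the regular minimum. A quick sketch:

```python
# Federal figures cited in the post: $7.25 regular minimum, $2.13 tipped minimum.
regular_minimum = 7.25
tipped_minimum = 2.13

tip_credit = regular_minimum - tipped_minimum   # $5.12/hour the employer does not pay
hours_per_week = 40
tips_needed_per_week = tip_credit * hours_per_week

print(f"Tips must average ${tip_credit:.2f}/hour, or ${tips_needed_per_week:.2f} "
      f"for a 40-hour week, just to reach the regular minimum wage.")
```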

My position on this issue is very simple. I, as a customer, do not want to directly pay the salary of the employees of the restaurant I eat at. My position is not against all tipping; if my server does an excellent job, then they deserve a tip. But forced tipping of 20% or more simply because the restaurant does not pay its workers a minimum wage? That I do have a problem with.


What if restaurants paid the real minimum wage?

Take an example, Restaurant X. X makes $25,000 a month in revenue, and let’s say after its food, drink, and management salary costs it has $3,000 to spare. X employs 8 servers for the month and pays those servers the tipped minimum wage of $2.13 an hour. For those 8 employees working 40 hours a week for 4 weeks, total wage expense to the restaurant is $2,726.40, and they have $273.60 left over in Net Income. If they were to pay the regular minimum wage of $7.25 an hour, wage expense for the month would be $9,280, resulting in a net loss of $6,280 for the month. From a flat comparison standpoint, as you can see, bringing wages up to the real minimum would not work.
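Spelling out that wage arithmetic (all figures come straight from the Restaurant X example):

```python
# Restaurant X example from the post.
servers = 8
hours = 40 * 4                     # 40 hours/week for 4 weeks
margin_before_wages = 3_000        # left after food, drink, and management costs

for label, wage in (("tipped minimum", 2.13), ("regular minimum", 7.25)):
    wage_expense = servers * hours * wage
    net = margin_before_wages - wage_expense
    print(f"{label}: wage expense ${wage_expense:,.2f}, net income ${net:,.2f}")
```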

Let’s imagine that our Restaurant X sells chicken and steak dinners and charges from $12-$24 for its entrees. The restaurant makes $25,000 a month, let’s say 30% of that is in appetizers, sides, and drinks. So 70%, or $17,500, is made with the entrees. Restaurant X would have to seat 973 different customers in a month to arrive at this number, with an average entrée value of $18. If the restaurant were to raise their food prices $3, making the new range of prices $15-$27, and they seat the same number of customers, they would make an additional $2,919 a month, not including any changes to their other food or drink items.
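And the price-increase side of the ledger, again using only the example’s own figures:

```python
import math

# Entrée figures from the Restaurant X example.
monthly_revenue = 25_000
entree_revenue = monthly_revenue * 0.70                         # 70% of sales -> $17,500
average_entree_price = 18
customers = math.ceil(entree_revenue / average_entree_price)    # 973 entrées served

price_increase = 3                                              # raise every entrée by $3
extra_revenue = customers * price_increase
print(f"{customers} entrées x ${price_increase} more each = "
      f"${extra_revenue:,} in extra revenue per month")
```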

I will not keep going down the line until our hypothetical restaurant breaks even, but you can see how, by raising prices a marginal amount, or by making manageable investments in ambiance and experience (for example, dim outdoor lighting and some foliage to attract a more formal evening crowd), our restaurant can cover the expense of paying its workers fairly. Yes, it can be done.


Is there any real-world evidence?

In recent years, some big-name restaurant chains have experimented with raising their food prices and ending the practice of forcing their employees to rely on tips. Joe’s Crab Shack attempted this for three months before ending it, presumably because it could not sustain the changes in many of the areas in which it operates. And largely, it does depend on the area. But restaurants all over the country are starting to buck the trend.

Take Packhouse Meats, in Newport, Kentucky. Packhouse decided to cut out tipping and instead pay its servers a minimum wage of $10 an hour. That is high by industry standards, but Packhouse realized that several problems with the tipping system were resulting in a high turnover rate for its employees, and that turnover cost a lot of money. It raised its food prices by almost 20%, but on the flip side, it encourages its customers not to tip. Ultimately the resulting money paid by the customer is the same, whether it is the old food price plus a 20% tip, or the new 20% higher food price with no added tip. In my opinion, this is a far fairer way to treat every stakeholder who participates in the restaurant system. The customer knows what they are buying and how much they are paying, the employee knows how much they are making and can still collect a tip if their service is truly exceptional, and the restaurant suffers little to no net change in margins. It should be noted, for the critics of my position, that Packhouse has since converted itself into a food truck, but it is unclear whether this was due to its wage policy or something else.
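The “customer pays the same either way” claim is easy to check; the check sizes below are hypothetical, and the 20% figures come from the Packhouse description.

```python
# Hypothetical check sizes; 20% figures from the Packhouse example.
for old_menu_price in (20.00, 45.00):
    old_model = old_menu_price * 1.20   # old price plus a 20% tip
    new_model = old_menu_price * 1.20   # 20% higher menu price, no tip expected
    print(f"${old_menu_price:.2f} meal -> old model ${old_model:.2f}, "
          f"new model ${new_model:.2f}")
```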


What can we do?

Demand-side economists believe that wages are sticky. A wage settles where two factors meet: how much employers are willing to pay, and how hard employees are willing to bargain. Once those two factors reach an equilibrium, the wage is set and typically does not fluctuate much. Tipped wages are so low because tipped workers have let themselves believe that they do not deserve the same wages as everyone else, which feeds into the narrative of customers paying their wages for them. If you want this to change, there are a few things you can do.

If you work for a restaurant, start voicing your concerns to your employer. One voice is nearly powerless in the modern right-to-work economy, but here’s a motto I often repeat to people in the industry: if every server in America got up and walked out the door because they were not being paid fairly, employers would have to pay them fairly or close their doors. Unions may be dying in this country, but that does not mean workers have no rights or must simply accept whatever their boss gives them. If you eat at restaurants frequently, talk to your server about the idea of tips being built into food prices instead of a post-meal term of endearment.

In summary, tipping has undergone a transformation in America, from a ritual that rewarded service employees for exceptional job performance, to an excuse for restaurant owners to legally underpay their workers and leave customers societally compelled to foot the bill. I believe that paying restaurant workers more fairly, by increasing the prices of menu items by a commensurate amount, provides greater transparency in the dining experience, leaves customers with the ability to still leave a tip for great service (its original purpose, after all), and most importantly, takes the pressure off the customer to subsidize the restaurant’s failure to pay its employees fairly.

Do you have any thoughts on this topic? Think I’m wrong and want to tell me off? Feel free to leave a comment on this post or shoot me a message with your thoughts! Thank you for reading, and stay tuned for much more!


Sources

Packhouse Meats: https://thinkprogress.org/these-waiters-dont-make-tips-instead-they-make-15-an-hour-c9d3b6e94778/

Tipped Minimum Wage: http://www.businessinsider.com/history-of-tipping-2015-10

Fair Labor Standards Act: https://www.dol.gov/whd/flsa/

Gun Violence: An Economic Problem with an Economic Solution

Guns in America are a uniquely polarizing issue. Some people are repulsed at the very sight of a gun, while others are in love with them and the culture they represent. They certainly hold an important place throughout our history, but gun violence simply cannot be ignored. Almost all Americans agree that gun violence is a scourge on the country. You have probably heard these countless times, but here are some facts to consider:

  • The US accounts for 82% of gun deaths among high-income countries
  • There have been over 12,000 gun homicides in the US annually since 2012
  • Americans own about 48% of all civilian-owned guns on Earth

(statistics taken from www.everytownresearch.org)

I am not writing simply to regurgitate depressing gun statistics. Instead, I argue that gun violence in America is an economic problem as much as it is a societal one. After I make my case, I will propose an economic solution to the growing problem, one that does not infringe on any constitutional right but targets specific economic factors related to America’s unique obsession with guns.


 

It is not my intention to belittle or otherwise misuse any gun-related tragedy in order to make this case. However, I will be examining the most recent shooting and analyzing its economic impact to determine the ‘cost’ of such an event. The shooting occurred in Parkland, Florida (coincidentally, about 15 minutes from where I work). Oftentimes we do not even think about the cost of things like human life, but I will endeavor to go over a few of those costs.

Parkland is an affluent suburb outside of Fort Lauderdale, Florida, so I will make a few assumptions (constraints). First, I will assume that the victims’ parents would be putting money away for their college years, rather than having to take out large amounts of loans. Second, the cost of living is somewhat above average. Third, the State of Florida has offered to pay for all final expenses as well as medical care, so I will assume that those pledges have been honored.

Now that we have our picture somewhat refined, let’s apply some reference numbers (average final expenses, 529 college savings balances, and daily hospital costs) and run through the situation.

  • 17 students were killed. Final expenses for these victims are $7,509 per victim, or $127,653 in total. In addition, the 529 savings will have to be withdrawn by the parents because the accounts no longer have a qualified purpose (the reason for their tax deferral is gone), subject to a 10% early withdrawal penalty. The total 529 money saved up is $476,425, so a 10% penalty would be $47,642.50. The economic impact of these two costs alone comes to over $175,000.
  • 14 students were wounded and are still in the hospital recovering. Time spent in the ICU alone costs $2,068 per day; they have been there 5 days so far, and we can assume they will have to stay in recovery for at least one more, for six days in total (https://www.hopkinsmedicine.org/news/media/releases/gunshot_stabbing_victims_are_recovering_without_exploratory_surgery_study_shows). This comes out to $173,712 in hospital costs across the 14 victims. In addition, the initial operation itself, as well as additional tests (CT scans, X-rays, etc.), totals over $10,000 per victim (anecdotal evidence at http://www.govtech.com/em/safety/The-Bill-for-Treating-a-Gunshot-Wound-21000-for-the-First-35-Minutes.html). Using $10,000, we arrive at $140,000 for these victims. In total, these two costs for the wounded come out to just over $313,000.

There are obviously many more variables to consider, but add up the total costs for both wounded and dead victims and you arrive at around $489,000, in just these specific categories, for just this one isolated mass shooting. Would you be surprised to know that the Parkland shooting was Florida’s third mass shooting in three years? Fort Lauderdale International Airport suffered a shooting with five deaths on January 6, 2017, and the prior June, the Pulse Nightclub shooting in Orlando claimed 49 lives and wounded 58 others. Clearly, this is a far bigger economic problem than we thought.
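For transparency, here is the arithmetic behind those figures, using the reference numbers cited in the bullets above:

```python
# Reference figures cited above: $7,509 average final expenses, $476,425 in total
# 529 savings (subject to a 10% early-withdrawal penalty), $2,068/day in the ICU,
# and roughly $10,000 per initial operation and tests.
killed, wounded = 17, 14

final_expenses = killed * 7_509                  # $127,653
penalty_529 = 476_425 * 0.10                     # $47,642.50
cost_killed = final_expenses + penalty_529

icu_days = 6                                     # 5 days so far plus one more assumed
hospital_stay = wounded * icu_days * 2_068       # $173,712
operations = wounded * 10_000                    # $140,000
cost_wounded = hospital_stay + operations

print(f"Costs tied to the dead:    ${cost_killed:,.2f}")
print(f"Costs tied to the wounded: ${cost_wounded:,.2f}")
print(f"Combined:                  ${cost_killed + cost_wounded:,.2f}")   # ~$489,000
```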


 

How does any of this get paid for? Parkland is a unique example, with the State of Florida offering to pay for all final expenses and the socioeconomic status of the area suggesting most families carry health insurance. But what about all the other school shootings happening across the country? It would be foolish to assume the same applies to them. And there have been so many over the past few years, so many that I will not list them here.

Should the victims’ families be fully liable for all costs incurred because of a shooting? Why are the gun manufacturers totally removed from shootings, with nothing to lose when tragedies like these happen?

Today I propose a way to even the financial playing field, in a way that both forces manufacturers of guns to be (at least marginally) responsible for the atrocities people commit with their products, and that incentivizes good, responsible gun owners to be more proactive in demanding constructive change from their respective Congressmen and lobbyists. What I propose today, is Gun Insurance.


 

Gun insurance would work in the same functional capacity as car insurance or health insurance: it would shield the payer from far-reaching financial exposure to a catastrophic event, in exchange for a small monthly premium. Gun insurance would borrow features from both of those other types of insurance. Just as car insurance is mandatory upon purchase of a car, necessary to protect the driver from any large cost associated with the adverse outcomes of operating a dangerous instrument, I would like gun insurance to function the same way. And just as health insurance charges different premiums to different demographics of buyer, gun insurance could apply different premiums based on age range, prior criminal history, or health issues (a ‘preexisting condition’), and so on. Ultimately, I believe this is what is needed.

Roughly one quarter of Americans own at least one gun; that’s about 81.5 million people. Nearly all, if not all, of them are responsible, take care of their weapons, and would never think of committing a mass shooting or other atrocity. But whether you believe the gun laws in this country have to change, or whether you think our treatment of mental health has to change, there is only one way either can really change. For the last decade-plus, organizations like the NRA have successfully lobbied Congress not to pass any significant gun reform legislation. Whether or not you think that is a problem, the fact remains that lobbyists against gun reform are extremely effective at influencing Congress to not take action in the face of increasingly overwhelming calls to act. Therefore, I submit to you that the onus of change in legislation, be it gun reform or mental health reform, is on their shoulders.

Gun Insurance would force these organizations and lobbies into action through financial incentive. Currently, if there is an automobile accident, the victim’s family is able to sue the auto manufacturer. If there is medical malpractice, the victim’s family can sue the hospital. But if there is a shooting, the victim’s family cannot sue the gun manufacturer. Insurance would change that in two ways. Gun manufacturers would be subject to the same scrutiny over misuse of their products as automakers or drug companies, and would have to dole out cash settlements over cases in the same manner. And all legal gun owners would be charged a premium, the same way all car owners are charged for insurance. In a year with no mass shootings or other extraneous gun deaths, the manufacturers would not have to make any payments, so the premium could be infinitesimally small, say, $5 for the entire year. 81.5 million gun owners each paying only $5 a year yields over $407 million in premiums to the gun manufacturers. But in today’s environment, where the quantity and severity of mass shootings seems to increase every year, gun owners would feel the pinch.
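For a rough sense of scale: the $5 premium and the 81.5 million owner count come from the paragraph above, while the bad-year claims figure below is a made-up illustration of how quickly the premium would have to climb.

```python
# Figures from the post: ~81.5 million gun owners, $5 baseline annual premium.
gun_owners = 81_500_000
base_premium = 5

base_pool = gun_owners * base_premium
print(f"Baseline premium pool: ${base_pool:,}")          # $407,500,000

# Hypothetical: if insurers had to cover, say, $2 billion in claims in a bad year,
# the break-even premium per owner would rise accordingly.
assumed_claims = 2_000_000_000
loaded_premium = assumed_claims / gun_owners
print(f"Premium needed to cover ${assumed_claims:,} in claims: "
      f"${loaded_premium:.2f} per owner")
```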

To be clear, my intention with this idea is absolutely not to penalize responsible gun owners for the acts of a few deranged monsters. I am predicating this solution on two factors. One, that nearly all Americans agree that something has to be done legally to solve the growing problem of gun violence, and that the gun lobby is the most effective agent to create that action. Two, that the surest way to make a presence felt is through someone’s wallet. Currently, the gun manufacturers and gun activist groups are shielded from negative effects of these events because they are not hurt financially. Gun insurance would allow them to bleed, the same as us normal citizens, whenever an atrocity like this occurs, and it would incentivize change to happen in the most organic way, from the bottom up.

Gun lobbyists would likely not seek to change gun laws, even if they were being financially squeezed. That would drive down availability of their products for demand-rich consumers. Instead, they would likely seek to change the country’s mental health laws. Are you upset that certain politicians blame mass shootings on mental health, but seem to not do anything to actually address mental health problems? This would be a way to change that. Again, regardless of what legislation you would like to see passed, influencing the people that control legislation is the best way to achieve something. Maybe not the optimal change, or the change any of us personally desire, but something. Something would be a significant improvement from what we have now.


 

If you’ve read this far, congratulations, you’re finally finished! I know this was on the depressing side of economics, but it’s important to know that everything in life has a cost, and a way to address that cost, if we think hard enough. As always, if you have any questions or comments, please leave a comment on this post or send me an email!