Should-Read: CPPC: Victory in California! Drug Price Transparency Bill Becomes Law

Should-Read: CPPC: Victory in California! Drug Price Transparency Bill Becomes Law: “California Governor Jerry Brown signed SB 17, a drug price transparency bill, into law…

…Brown said there is:

real evil when so many people are suffering so much from rising drug prices… The essence of this bill is pretty simple. Californians have a right to know why their medical costs are out of control, especially when the pharmaceutical profits are soaring… This measure is a step at bringing transparency, truth, exposure to a very important part of our lives — that is the cost of prescription drugs.

The new law will go into effect on January 1st, 2018.

The law… make[s] drug price[s]… more transparent… [by] requiring drug companies to give health insurers and government health plans at least sixty days’ warning before prescription drug price hikes that would exceed 16% over a two-year period. Drug companies will also have to provide reasons behind the price increases. Big Pharma strongly opposed the bill….

Drug prices have been skyrocketing over the past decade, and policymakers want to take action to reduce them. One of the major problems has been a great lack of information on drug prices, what contributes to their increase, and who is responsible for the harm to consumers. The passage of SB 17 shows that transparency bills are not doomed to defeat…
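The 16-percent trigger summarized above is, at bottom, a cumulative-growth comparison. Here is a minimal sketch of the check in Python (my own illustration of the rule as quoted, not the statute’s actual computation, which involves additional thresholds, baselines, and exemptions):

```python
def requires_notice(price_two_years_ago: float, proposed_price: float,
                    threshold: float = 0.16) -> bool:
    """Rough check of the notice trigger as summarized above: a cumulative
    increase of more than 16% over a two-year period obliges the manufacturer
    to give purchasers at least 60 days' advance warning.
    (Illustrative only -- SB 17's actual baseline and exemptions are more
    detailed than this.)"""
    cumulative_increase = (proposed_price - price_two_years_ago) / price_two_years_ago
    return cumulative_increase > threshold

# A drug priced at $100 two years ago and $120 now has risen 20%,
# so insurers and government purchasers would be owed 60 days' notice.
print(requires_notice(100.00, 120.00))  # True
print(requires_notice(100.00, 110.00))  # False: a 10% rise is under the 16% trigger
```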

Should-Read: Jay Shambaugh et al.: Thirteen Facts About Wage Growth

Should-Read: Jay Shambaugh et al.: Thirteen Facts About Wage Growth: “Economic and policy changes are both important for the division of economic gains… http://www.hamiltonproject.org/assets/files/thirteen_facts_wage_growth.pdf

…We explore the roles of technological progress, globalization, and changing returns to education in driving some of these wage trends over the long run. We also examine declines in the rate of union membership and the real minimum wage, focusing on how these developments have affected the level and distribution of wages…

Must-Read: Danny Quah: When Open Societies Fail

Must-Read: Smart meditations by the extremely sharp Danny Quah. I find myself wishing that he would engage with Habermas on the “public sphere” here, somehow…

Danny Quah: When Open Societies Fail: “Why is Wikipedia mostly OK, but so many comments at the end of newspaper articles make you weep for humanity’s future – when both are open for everyone to write?…

…Jimmy Wales of Wikipedia spoke at Khazanah Megatrends Forum in Kuala Lumpur. He reflected on, among other things, Wikipedia’s openness of knowledge production, its usefulness to many, and its sustainability. About 18 months before then, Microsoft had shut down a failed experiment on openness and Artificial Intelligence: Tay, the online Twitterbot AI-enabled to learn from its interaction with users. In the event, Internet trolls ended up keying in so much venom-driven input that Tay came to spew hate, racism, and misogyny. These three story arcs — Wikipedia, Tay, online newspaper commentary — reflect generally on the workings of open societies.

Many observers find appealing the hypothesis that open, individual-oriented, bottom-up societies achieve outcomes that, even if not point-wise optimal, are dynamically robust and resilient. Since at least Popper and Friedman, this idea has been a plank of Western economic and political analysis. This hypothesis often informs prediction of eventual failure in varieties of Asian economic performance, from Singapore’s to China’s. Open societies might suffer shocks — financial crisis, Trump, Brexit — but they learn from mistakes and get back on track quickly. In contrast, authoritarian controlled systems are unsustainable and fragile — even if they might achieve in the short to medium term Disneyland-like success and economic performance….

Openness and individual empowerment are in themselves desirable and deserving of aspiration. The question here concerns not their intrinsic appeal; it is instead about their implications for systemic outcomes. Do all open systems, in fact, show robustness and resilience?…

For Wikipedia, the user community grew empowered as an emergent, self-organised, self-recognised entity. The word “emergent”… suggest[s]… the outturn can only be described at the level of the system…. By contrast, for online newspapers and Tay, no emergence occurred: The community instead remained a ragtag group of fiercely independent separate individuals; further, the system itself continued to reflect accurately the actions and aims of the… representative agent. No positive externalities were generated…. Levels of hierarchy are not inconsistent with open systems, despite the emotional appeal of naive flat one-person, one-vote representation…. Near-invisible editors do not need to be world-leading experts on the subject in question — indeed, they cannot be — but instead are just reliable repositories of style and sense…. For Wikipedia, that background authority elicited a voluntary social responsibility in the system’s participants. The successful outcome came with a surfacing of a duties-oriented thinking….

The failed outcomes produced no such social identity: if anything, individual participants remained firmly embedded in an individual rights-centric approach to engagement. Under a rights-centric system, participants should be allowed to do or say anything they want. They can say uplifting, constructive, informative things; or they can be destructive. Here, that freedom to choose produced chaos and disorder….

Just as traditional views about individual rationality are challenged by behavioral economics, so too long-held thinking needs to be re-examined regarding the success of open societies…. Societies might need to trade off between individual freedoms and social well-being. Some observers find that uncomfortable. But then dealing sensibly with tradeoffs is what economics is all about.

Should-Read: David Anderson: State Approaches to Handling CSR Uncertainty for 2018 Premiums

Should-Read: David Anderson: State Approaches to Handling CSR Uncertainty for 2018 Premiums: “The 2018 ACA Marketplace that begins on November 1, 2017…

…Much of the pricing variance will be a result of choices that states and insurers have made in response to the uncertainty over whether the federal government will continue to reimburse insurers for the Cost Sharing Reduction (CSR) subsidies that insurers are legally obligated to provide to qualified exchange enrollees. In the ACA Marketplace, enrollees choose among plans grouped in four “metal levels” defined by… the percentage of average medical costs the plan will cover[, i.e., the plan’s actuarial value (AV)]. Bronze plans… 60 percent AV, Silver plans… 70 percent… Gold… 80… and Platinum… 90….

CSR is available only to low income enrollees, and only with Silver plans. For low income enrollees, CSR boosts AV from a baseline of 70 percent to:

  • 94 percent for enrollees with incomes up to 150 percent of the Federal Poverty Level (FPL)
  • 87 percent for enrollees with incomes between 150 and 200 percent FPL
  • 73 percent for enrollees with incomes between 200 and 250 percent FPL

At present, 57 percent of on-Marketplace enrollees access CSR, including over 80 percent of Silver plan enrollees.  Silver plans are priced, however, as if the AV is 70 percent for all enrollees. Under threat that the Trump administration will stop reimbursing CSR, or that the courts eventually will order the administration to stop payment if Congress fails to appropriate the funds, states and insurers must decide how to enable insurers to cover the cost of providing the richer CSR-boosted coverage. States and insurers have taken several approaches….

  • Assume CSR is paid in a timely manner.
  • Assume CSR is not paid and load all costs onto plans at all metal levels.
  • Assume CSR is not paid and load all costs onto all Silver plans only.
  • Assume CSR is not paid and load all costs onto on-exchange Silver plans only….

The distributional consequences of these different choices are significant and varied…
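To make the arithmetic behind these loading choices concrete, here is a minimal sketch (my own gloss on the brackets quoted above, not Anderson’s calculation) mapping an enrollee’s income as a percent of FPL to the Silver-plan actuarial value the insurer must deliver:

```python
def silver_plan_av(income_pct_fpl: float) -> float:
    """Actuarial value of a Silver plan for a subsidy-eligible enrollee,
    per the CSR brackets quoted above. Baseline Silver AV is 70%; CSR
    boosts it for enrollees up to 250% FPL. (Illustrative sketch of the
    brackets as described, ignoring other eligibility details.)"""
    if income_pct_fpl <= 150:
        return 0.94
    elif income_pct_fpl <= 200:
        return 0.87
    elif income_pct_fpl <= 250:
        return 0.73
    else:
        return 0.70  # no CSR boost above 250% FPL

# Silver plans are priced as if every enrollee got 70% AV, so the gap
# between silver_plan_av(income) and 0.70 is the cost insurers must
# recover somewhere if the federal CSR reimbursement stops.
for income in (140, 180, 240, 300):
    print(income, silver_plan_av(income))
```

The gap between that delivered AV and the 70 percent the plan is priced at is the cost that the four loading strategies distribute differently across metal levels and markets.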


Should-Read: Paul Krugman: Subsidies, Spite, and Supply Chains

Should-Read: Paul Krugman: Subsidies, Spite, and Supply Chains: “I’ve been fairly complacent about NAFTA’s fate…

…Not that I imagine that Trump, or for that matter any of his senior advisers, has any understanding of what NAFTA does or the foreign-policy implications of tearing it down. But I thought sheer interest-group pressure would keep the agreement mostly intact…. Breaking it up would be hugely disruptive, and the losers would include major industrial players who tend to have the ear of even Republican administrations. So I thought we’d likely get a few cosmetic changes to the agreement, allowing Trump to declare victory and walk away.

But look what just happened on health care. Never mind the millions who may lose coverage: Trump demonstrably doesn’t care about them. But his decision will also cost insurers and health care providers, the kind of people you might expect him to listen to, billions. And he did it anyway, evidently out of sheer spite…. So is it safe to assume that he won’t screw over much of U.S. manufacturing in the same way? At this point, the answer has to be “no.”… Reasonable people are worried, I think rightly, that Trump’s rage and spite might lead him to start a war. So why not worry that he’ll start a trade war instead (or as well)?

Must- and Should-Reads: October 15, 2017



Should-Read: Bruce Bartlett: I helped create the GOP tax myth. Trump is wrong: Tax cuts don’t equal growth

Should-Read: Bruce Bartlett: I helped create the GOP tax myth. Trump is wrong: Tax cuts don’t equal growth: “Even if they had released a complete plan — not just the woefully incomplete nine-page outline released Wednesday…

…Republicans have failed to make a sound case that it’s time to cut taxes. Nor have they signaled that they’ll commit to a viable process…. The first version of the ’81 tax cut was introduced in 1977 and underwent thorough analysis by the CBO and other organizations, and was subject to comprehensive public hearings. The Tax Reform Act of 1986 grew out of a detailed Treasury study and took over two years to complete. Rushing through a half-baked tax plan… should be rejected out of hand. As Sen. John McCain (R-Ariz.) has repeatedly and correctly said, successful legislating requires a return to the “regular order.” That means a detailed proposal with proper revenue estimates and distribution tables from the Joint Committee on Taxation, hearings and analysis by the nation’s best tax experts, markups and amendments in the tax-writing committees, and an open process in the House of Representatives and Senate.

There are good arguments for a proper tax reform even if it won’t raise GDP growth. It may improve economic efficiency, administration and fairness. But getting from here to there requires heavy lifting that this Republican Congress has yet to demonstrate. If they again look for a quick, easy victory, they risk a replay of the Obamacare repeal fight that wasted so much time and yielded so little.

Should-Read: Luigi Iovino and Dmitriy Sergeyev: Quantitative Easing without Rational Expectations

Should-Read: Luigi Iovino and Dmitriy Sergeyev: Quantitative Easing without Rational Expectations: “We study the effects of risky assets purchases financed by issuance of riskless debt by the government (quantitative easing) in a model without rational expectations…

…We use bounded rationality in the form of level-k thinking and the associated reflective equilibrium that converges to the rational expectations equilibrium in the limit. This equilibrium notion rationalizes the idea that it is difficult to change expectations about economic outcomes even if it is easy to shift expectations about the policy. Quantitative easing policy increases the price and production of risky assets in the reflective equilibrium, while it is neutral in the rational expectations equilibrium. In the extension of the model, we show that bounded rationality dampens the strength of the market segmentation channel of quantitative easing…
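To fix ideas about the equilibrium concept, here is a schematic way to write level-k/reflective reasoning (my own gloss in generic notation, not the authors’ model):

```latex
% Schematic gloss of level-k / reflective reasoning (generic notation, not the paper's).
% x = vector of endogenous outcomes (risky-asset price, production);
% F = mapping from expected outcomes to realized outcomes, given the announced QE policy.
% (Requires amsmath/amssymb.)
\begin{align*}
  x^{(0)} &= x^{\text{status quo}}
    && \text{(the policy is understood, but outcome expectations are unchanged)} \\
  x^{(k)} &= F\!\bigl(\mathbb{E}\bigl[x^{(k-1)}\bigr]\bigr), \qquad k = 1, 2, \dots \\
  x^{RE}  &= F\!\bigl(\mathbb{E}\bigl[x^{RE}\bigr]\bigr)
    && \text{(rational expectations fixed point; } x^{(k)} \to x^{RE} \text{ as } k \to \infty\text{)}
\end{align*}
```

At finite k, agents grasp the policy announcement but only partially adjust their expectations of outcomes, which is why QE can move asset prices and production in the reflective equilibrium while being neutral under full rational expectations.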

Should-Read: Geoffrey Pullum (2013): Why Are We Still Waiting for Natural Language Processing?

Should-Read: This piece by the interesting Geoffrey Pullum seems to start out non-optimally.

There is a difference between (1) true “AI” on the one hand and (2) a successful voice/text interface to database search on the other. At the moment (2) is easy, and we should implement it. Doing so requires only that humans adjust a little and avoid using “not”: figuring out within which superset of results any particular “not” is asking for the complement is genuinely hard, and does require true or nearly-true “AI”.

Thus to solve Pullum’s problem, all you have to do is ask two queries: (i) “Which UK papers are part of the Murdoch empire?” and (ii) “What are the major UK papers?” Take the complement of (i) within (ii) and you immediately get a completely serviceable and useful answer to your question.

That you need two queries rather than one is because Google has not set itself up to produce short lists as possible answers to (ii) and (i) and then subtract (i) from (ii); and the reason it has not done that is that doing so is a hard AI problem, not something that yields to the brute-force-and-massive-ignorance word-frequency-plus-internet-attention approach that is Google’s shtick.
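Once you have the two short lists, the remaining step is a one-line set difference. A minimal sketch, with hand-filled lists standing in for the query results Google will not hand you (the titles follow Pullum’s examples plus a couple of other well-known non-Murdoch papers):

```python
# Illustration of the two-query workaround: assume we somehow have short lists
# answering (i) and (ii). The "hard" step Google skips is producing such lists
# at all, not the set arithmetic below.
major_uk_papers = {      # answer to (ii) "What are the major UK papers?"
    "The Times", "The Sun", "The Daily Telegraph",
    "Daily Mail", "Daily Mirror", "The Guardian",
}
murdoch_uk_papers = {    # answer to (i) "Which UK papers are part of the Murdoch empire?"
    "The Times", "The Sun",
}

# The answer to Pullum's question is the complement of (i) within (ii):
non_murdoch_papers = major_uk_papers - murdoch_uk_papers
print(sorted(non_murdoch_papers))
# ['Daily Mail', 'Daily Mirror', 'The Daily Telegraph', 'The Guardian']
```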

But what amazes me is that Google can get so close—not that “true AI” is really hard.

And maybe that is Pullum’s real point:

Geoffrey Pullum (2013): Why Are We Still Waiting for Natural Language Processing?: “Try typing this, or any question with roughly the same meaning, into the Google search box… http://www.chronicle.com/blogs/linguafranca/2013/05/09/natural-language-processing/

…Which UK papers are not part of the Murdoch empire?

Your results (and you could get identical ones by typing the same words in the reverse order) will contain an estimated two million or more pages about Rupert Murdoch and the newspapers owned by his News Corporation. Exactly the opposite of what you asked for. Putting quotes round the search string freezes the word order, but makes things worse: It calls not for the answer (which would be a list including The Daily Telegraph, the Daily Mail, the Daily Mirror, etc.) but for pages where the exact wording of the question can be found, and there probably aren’t any (except this post).

Machine answering of such a question calls for not just a database of information about newspapers but also natural language processing (NLP). I’ve been waiting for NLP to arrive for 30 years. Whatever happened?…

Three developments…. Google bet on… simple keyword search… [plus] showing the most influential first…. There is scant need for a system that can parse “Are there lizards that do not have legs but are not snakes?” given that putting legless lizard in the Google search box gets you to various Web pages that answer the question immediately….

Speech-recognition systems have been able to take off and become really useful in interactive voice-driven telephone systems… the magic of a technique known as dialog design…. At a point where you have just been asked, “Are you calling from the phone you wish to ask about?” you are extremely likely to say either Yes or No, and it’s not too hard to differentiate those acoustically…. Prompting a bank customer with “Do you want to pay a bill or transfer funds between accounts?” considerably improves the chances of getting something with either “pay a bill” or “transfer funds” in it; and they sound very different…. Classifying noise bursts in a dialog context is way easier than recognizing continuous text….

Machine translation… calls for syntactic and semantic analysis of the source language, mapping source-language meanings to target-language meanings, and generating acceptable output…. What has emerged instead… is… pseudotranslation without analysis of grammar or meaning…. The trick: huge quantities of parallel texts combined with massive amounts of rapid statistical computation. The catch… output inevitably peppered with howlers…. We know that Google Translate has let us down before and we shouldn’t trust it. But with nowhere else to turn (we can’t all keep human translators on staff), we use it anyway. And it does prove useful… enough to constitute one more reason for not investing much in trying to get real NLP industrially developed and deployed.

NLP will come, I think; but when you take into account the ready availability of (1) Google search, and (2) speech-driven applications aided by dialog design, and (3) the statistical pseudotranslation briefly discussed above, the cumulative effect is enough to reduce the pressure to develop NLP, and will probably delay its arrival for another decade or so.
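Pullum’s point about dialog design is that the prompt shrinks the space of expected replies to a few acoustically distinct phrases, so “classifying noise bursts” becomes tractable. A toy sketch of the idea (a hypothetical keyword matcher standing in for the constrained grammars and acoustic models a real IVR system would use):

```python
# Toy illustration of dialog design: because the prompt constrains what callers
# are likely to say, classifying the reply reduces to telling a few very
# different phrases apart. (Hypothetical keyword matcher operating on the
# recognizer's transcript; not how any particular IVR product works.)
def classify_reply(prompt_options: dict[str, list[str]], transcript: str) -> str | None:
    """Return the first intent whose keywords appear in the transcript."""
    text = transcript.lower()
    for intent, keywords in prompt_options.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return None  # no match: fall back to re-prompting the caller

# Prompt: "Do you want to pay a bill or transfer funds between accounts?"
bank_menu = {
    "pay_bill": ["pay a bill", "pay my bill", "bill"],
    "transfer_funds": ["transfer funds", "transfer", "between accounts"],
}

print(classify_reply(bank_menu, "I'd like to pay a bill please"))    # pay_bill
print(classify_reply(bank_menu, "transfer funds between accounts"))  # transfer_funds
```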

Should-Read: @delong @pseudoerasmus @leah_boustan: On Twitter: What high skilled jobs did the domestication of the horse eliminate?

Should-Read: @delong @pseudoerasmus @leah_boustan: On Twitter: What high skilled jobs did the domestication of the horse eliminate?: “@leahboustan: @pseudoerasmus @de1ong To me, robot has connotation of ‘artificial intelligence’ so CNC would be robot-like but assembly line would not be…

@pseudoerasmus: well the issue is mostly semantic but I see no reason to stress AI-like aspects; for me anything which reduces L intensity is ‘robotic’

@leah_boustan: Yes, I suppose it is semantic. But, we already have a phrase for what you describe (“K that subs for low-skilled L”)

@pseudoerasmus: I prefer to stress the historical continuity. Fear of robots continues a 250-year-old theme of fear of biased tech changes reducing L inputs. The earliest machines did not eliminate low-skilled jobs; they eliminated (for that time) high-skilled jobs.

@de1ong: What high skilled jobs did the domestication of the horse eliminate?

Humans add value as:

  • B. strong backs
  • F. nimble fingers
  • M. microcontrollers
  • R. robots not yet invented
  • A. accountants
  • S. smilers
  • P. personal servitors
  • T. thinkers

(B) started to go out with the horse, and (F) with the Industrial Revolution. But that was OK, because there was huge demand for M, R, and A. Now, however, there are fewer and fewer R—people doing the almost-automatable parts of high-throughput production processes. Robots are taking M—every horse needed a human brain as a microcontroller, but that source of value added is going away. And AI is taking accounting (A) in the broad sense.

That leaves us with T, P, and S—thinking, personal servitors, and smiling (i.e., social engineering and “management” in the broadest sense).

“Unskilled/middle-skilled/high-skilled” just does not cut it. Instead, we have B, F, M, R, A, S, P, and T, each with flavors that need more or less book-larnin’ and/or experiential feedback and practice. Plus the whole category of “unskilled” is misleading. An “unskilled” job is a job that can be done by the 50-watt supercomputer that is the human brain without the extensive and painful reprogramming needed to get it to do things far from its default skill set (e.g., alphabetically sorting 500 names). But every “unskilled” job is, if you go to the EECS departments, also classified as a currently-unsolvable AI problem. From the computer’s perspective, these are not “low cognitive load” or “easy” tasks at all.

@pseudoerasmus: I agree 100% but yr mention of the horse is western-Eurasian-centric since east[ern] Eurasia had [a] different rate of animal-human substitution :-)

@de1ong: If you say that, you must follow it with references on the domestication of the elephant and the llama…