Should-Read: Luigi Iovino and Dmitriy Sergeyev: Quantitative Easing without Rational Expectations

Should-Read: Luigi Iovino and Dmitriy Sergeyev: Quantitative Easing without Rational Expectations
: “We study the effects of risky asset purchases financed by issuance of riskless debt by the government (quantitative easing) in a model without rational expectations…

…We use bounded rationality in the form of level-k thinking and the associated reflective equilibrium that converges to the rational expectations equilibrium in the limit. This equilibrium notion rationalizes the idea that it is difficult to change expectations about economic outcomes even if it is easy to shift expectations about the policy. Quantitative easing policy increases the price and production of risky assets in the reflective equilibrium, while it is neutral in the rational expectations equilibrium. In the extension of the model, we show that bounded rationality dampens the strength of the market segmentation channel of quantitative easing…
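The mechanism is easy to see in a toy numerical example. Below is a minimal sketch (emphatically not the Iovino-Sergeyev model: the linear best-response map and all numbers are invented) of how level-k iteration converges to the rational-expectations fixed point, and of why stopping at a low k leaves room for the policy to have real effects:

```python
# Toy illustration of level-k / reflective-equilibrium convergence.
# Everything here is invented; this is not the Iovino-Sergeyev model.

def F(p_expected):
    """Hypothetical best-response map: the market-clearing price when
    agents expect the price p_expected. It is a contraction, so
    p = F(p) has a unique fixed point: the rational-expectations
    equilibrium price."""
    return 0.6 * p_expected + 4.0

p = 1.0  # level-0 belief: anchored even after the policy is announced
for k in range(1, 11):
    p = F(p)  # level-k agents best-respond to level-(k-1) expectations
    print(f"level {k}: price = {p:.4f}")

# The fixed point solves p = 0.6*p + 4.0, i.e. p* = 10. At low k
# (bounded rationality) the price is still far from p*, which is why a
# policy can have real effects here that are neutral under rational
# expectations.
```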

Should-Read: Geoffrey Pullum (2013): Why Are We Still Waiting for Natural Language Processing?

Should-Read: This piece by the interesting Geoffrey Pullum seems to start out non-optimally.

There is a difference between (1) true “AI” on the one hand and (2) a successful voice/text interface to database search on the other. At the moment (2) is easy, and we should implement it—which requires that humans adjust a little and avoid using “not”, because figuring out within which superset of results any particular “not” is asking for the complement is genuinely hard, and does require true or nearly-true “AI”.

Thus to solve Pullum’s problem, all you have to do is ask two queries: (i) “Which UK papers are part of the Murdoch empire?”; (ii) “What are the major UK papers?”; take the complement of (i) within (ii), and you immediately get a completely serviceable and useful answer to your question.

That you need two queries rather than one is because Google has not set itself up to produce short lists as possible answers to (ii) and (i) and then subtract (i) from (ii). And the reason it has not done that is that doing so is a hard AI problem, rather than the brute-force-and-massive-ignorance word-frequency-plus-internet-attention that is Google’s shtick.
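To make the two-query procedure concrete: a minimal sketch in Python, with the two sets as illustrative stand-ins for whatever the two searches would actually return. The step Google will not do for you is the set difference at the end:

```python
# Sketch of the two-query workaround for "not" questions. The sets
# are illustrative stand-ins for search results, not a real database.

major_uk_papers = {   # query (ii): "What are the major UK papers?"
    "The Times", "The Sun", "The Daily Telegraph",
    "Daily Mail", "Daily Mirror", "The Guardian",
}
murdoch_papers = {    # query (i): "Which UK papers are Murdoch's?"
    "The Times", "The Sun",
}

# Complement of (i) within (ii): UK papers NOT in the Murdoch empire.
non_murdoch = major_uk_papers - murdoch_papers
print(sorted(non_murdoch))
```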

But what amazes me is that Google can get so close—not that “true AI” is really hard.

And maybe that is Pullum’s real point:

Geoffrey Pullum (2013): Why Are We Still Waiting for Natural Language Processing?: “Try typing this, or any question with roughly the same meaning, into the Google search box… http://www.chronicle.com/blogs/linguafranca/2013/05/09/natural-language-processing/

…Which UK papers are not part of the Murdoch empire?

Your results (and you could get identical ones by typing the same words in the reverse order) will contain an estimated two million or more pages about Rupert Murdoch and the newspapers owned by his News Corporation. Exactly the opposite of what you asked for. Putting quotes round the search string freezes the word order, but makes things worse: It calls not for the answer (which would be a list including The Daily Telegraph, the Daily Mail, the Daily Mirror, etc.) but for pages where the exact wording of the question can be found, and there probably aren’t any (except this post).

Machine answering of such a question calls for not just a database of information about newspapers but also natural language processing (NLP). I’ve been waiting for NLP to arrive for 30 years. Whatever happened?…

Three developments…. Google bet on… simple keyword search… [plus] showing the most influential first…. There is scant need for a system that can parse “Are there lizards that do not have legs but are not snakes?” given that putting legless lizard in the Google search box gets you to various Web pages that answer the question immediately….

Speech-recognition systems have been able to take off and become really useful in interactive voice-driven telephone systems… the magic of a technique known as dialog design…. At a point where you have just been asked, “Are you calling from the phone you wish to ask about?” you are extremely likely to say either Yes or No, and it’s not too hard to differentiate those acoustically…. Prompting a bank customer with “Do you want to pay a bill or transfer funds between accounts?” considerably improves the chances of getting something with either “pay a bill” or “transfer funds” in it; and they sound very different…. Classifying noise bursts in a dialog context is way easier than recognizing continuous text….

Machine translation… calls for syntactic and semantic analysis of the source language, mapping source-language meanings to target-language meanings, and generating acceptable output…. What has emerged instead… is… pseudotranslation without analysis of grammar or meaning…. The trick: huge quantities of parallel texts combined with massive amounts of rapid statistical computation. The catch… output inevitably peppered with howlers…. We know that Google Translate has let us down before and we shouldn’t trust it. But with nowhere else to turn (we can’t all keep human translators on staff), we use it anyway. And it does prove useful… enough to constitute one more reason for not investing much in trying to get real NLP industrially developed and deployed.

NLP will come, I think; but when you take into account the ready availability of (1) Google search, and (2) speech-driven applications aided by dialog design, and (3) the statistical pseudotranslation briefly discussed above, the cumulative effect is enough to reduce the pressure to develop NLP, and will probably delay its arrival for another decade or so.
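One concrete gloss on Pullum’s dialog-design paragraph: a constrained prompt turns open-ended speech recognition into a tiny classification problem. A minimal sketch of the bank-menu example (hypothetical keywords, no real speech stack):

```python
# Toy intent classifier of the dialog-design sort Pullum describes:
# after "Do you want to pay a bill or transfer funds between
# accounts?" the reply almost certainly contains one of two
# distinctive phrases, so crude keyword spotting suffices.

def classify_reply(transcript: str) -> str:
    t = transcript.lower()
    if "pay" in t or "bill" in t:
        return "PAY_BILL"
    if "transfer" in t or "funds" in t:
        return "TRANSFER_FUNDS"
    return "REPROMPT"  # not understood: ask the question again

print(classify_reply("I'd like to pay a bill, please"))  # PAY_BILL
print(classify_reply("um, transfer funds"))              # TRANSFER_FUNDS
```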

Should-Read: @delong @pseudoerasmus @leah_boustan: On Twitter: What high skilled jobs did the domestication of the horse eliminate?

Should-Read: @delong @pseudoerasmus @leah_boustan: On Twitter: What high skilled jobs did the domestication of the horse eliminate?: “@leah_boustan: @pseudoerasmus @delong To me, robot has connotation of ‘artificial intelligence’ so CNC would be robot-like but assembly line would not be…

@pseudoerasmus: well the issue is mostly semantic but I see no reason to stress AI-like aspects; for me anything which reduces L intensity is ‘robotic’

@leah_boustan: Yes, I suppose it is semantic. But, we already have a phrase for what you describe (“K that subs for low-skilled L”)

@pseudoerasmus: I prefer to stress the historical continuity. Fear of robots continues a 250-year-old theme of fear of biased tech changes reducing L inputs. The earliest machines did not eliminate low-skilled jobs. They eliminated (for that time) high-skilled jobs.

@delong: What high skilled jobs did the domestication of the horse eliminate?

Humans add value as:

  • B. strong backs
  • F. nimble fingers
  • M. microcontrollers
  • R. robots not yet invented
  • A. accountants
  • S. smilers
  • P. personal servitors
  • T. thinkers

(B) started to go out with the horse, & (F) with the IR. But that was OK, because there was huge demand for M, R, A. But now there are fewer and fewer R—people doing the almost-automatable parts of high-throughput production processes. Robots are taking M—every horse needed a human brain as a microcontroller, but that value-added source is going away. And AI is taking accounting in the broad sense.

That leaves us with T, P, and S—thinking, personal servitors, and smiling (i.e., social engineering and “management” in the broadest sense).

“Unskilled/middle-skilled/high-skilled” just does not cut it. Instead, we have BFRMASPT, each with flavors that need more or less book-larnin &/or experiential feedback and practice. Plus the whole “unskilled” category. An “unskilled” job is a job that can be done by the 50-watt supercomputer that is the human brain without the extensive and painful reprogramming needed to get it to do things far from its default skill set (e.g., alphabetically sort 500 names). But every “unskilled” job is, if you go to the EECS departments, also classified as a currently-unsolvable AI problem. From the computer’s perspective, these are not “low cognitive load” or “easy” tasks at all. (A sketch of the sorting example follows this exchange.)

@pseudoerasmus: I agree 100% but yr mention of the horse is western-Eurasian-centric since eastern Eurasia had a different rate of animal-human substitution :-)

@delong: If you say that, you must follow it with references on the domestication of the elephant and the llama…
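On the “alphabetically sort 500 names” example above: the task that takes extensive and painful reprogramming of the human brain is, from the machine’s side, a one-liner. A trivial sketch (the names are invented):

```python
# Sorting 500 names: painful reprogramming for the 50-watt human
# supercomputer, a one-liner for the silicon one. Names are invented.
import random

names = [f"Person-{random.randint(0, 99999):05d}" for _ in range(500)]
print(sorted(names)[:5])  # alphabetical order, effectively instant
```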

Should-Read: Paul Krugman: Lies, Lies, Lies, Lies, Lies, Lies, Lies, Lies, Lies, Lies

Should-Read: Paul Krugman: Lies, Lies, Lies, Lies, Lies, Lies, Lies, Lies, Lies, Lies: “Modern conservatives have been lying about taxes pretty much from the beginning of their movement…

…Made-up sob stories about family farms broken up to pay inheritance taxes, magical claims about self-financing tax cuts, and so on go all the way back to the 1970s. But the selling of tax cuts under Trump has taken things to a whole new level…. When I set out to make a list of the bigger lies, I thought there would be six or seven, and was surprised to come up with ten. So I thought it might be useful, both for myself and for others, to put together a crib sheet….

Lie #1: America is the most highly-taxed country in the world…. Lie #2: The estate tax is destroying farmers and truckers…. Lie #3: Taxation of pass-through entities is a burden on small business…. Lie #4: Cutting profits taxes really benefits workers…. Lie #5: Repatriating overseas profits will create jobs…. Lie #6: This is not a tax cut for the rich…. Lie #7: It’s a big tax cut for the middle class…. Lie #8: It won’t increase the deficit…. Lie #9: Cutting taxes will jump-start rapid growth…. Lie #10: Tax cuts will pay for themselves….

So there we are: ten big tax-cut lies. That was pretty exhausting, actually – and as I said, I’ve probably missed a few, and/or Trump will invent some new ones. But I hope this ends up being a useful reference….”

Should-Read: Simon Wren-Lewis: How Neoliberals weaponise the concept of an ideal market

Should-Read: Simon Wren-Lewis: How Neoliberals weaponise the concept of an ideal market: “I would tend to suggest…

…”neoliberalism is a political strategy promoting the interests of big money that utilises the economist’s ideal of a free market to promote and extend market activity and remove all ‘interference’ in the market that conflicts with these interests.” This replaces a definition based on following an idea ([Colin Crouch’s] market neoliberalism), by one of interests promoting an idea so long as it suits those interests. This alternative definition seems to fit two cases…. Large banks benefit hugely from an implicit subsidy provided by the state (being bailed out when things go wrong), but neoliberals do not worry too much about this form of state interference in the market (whereas economists do). Regulations, on the other hand, they do complain about. It is a very selective focus on market interference….

Executive pay… is always justified by neoliberals as being something determined by the free market, when obviously it is not. Yet if you pretend that there is a market in executives and salaries etc are set by that market and not the remuneration committees of firms, then you are being a good neoliberal by defending these salaries. This example is interesting because it involves defending one part of ‘big money’ (CEOs or some workers in finance) at the expense of another (shareholders). It is why I do not talk about the interests of capital in my definition.

Is this alternative definition simply negating the power of ideas and going back to good old interests? Only in part. Interests utilise an idea because the idea is a powerful persuasive tool. There is an obvious lesson for the left here. Because neoliberals promote the concept of an ideal market only when it suits them, opposing neoliberalism does not necessarily mean opposing the concept of an ideal market. The left should utilise the same concept to oppose monopoly power, for example. The idea of a free market is too powerful an idea to cede to the other side.

Should-Read: Martin Wolf: A political shadow looms over the world economy

Should-Read: I really wish that the FT would stop calling it “populism” and start calling it “fascism”. “Populism” is a set of policies—some good on net, some bad—to redistribute income downward. Fascism is something much uglier. Fascism is what we have. Nobody thinks Trump or his Republican enablers or their allies and fellow travelers elsewhere are interested in redistributing income downward:

Martin Wolf: A political shadow looms over the world economy: “Optimism about the global economy is tempered by fears of populism…

…promising simple solutions to complex problems… Brexit… Catalonia… above all, in the US, where the implications of Donald Trump’s election remain almost as obscure as they were on the day of his inauguration. Inevitably, the transformation of US policy is far and away the deepest worry. This might still amount to little more than sound and fury signifying nothing. But it is also far too early to be confident of that…. Vast tax cuts at a time of near full employment… reducing the external deficit through a series of negotiations, starting with Nafta. The aim of fixing an overall current account deficit through bilateral trade negotiations is not only intellectually incoherent, but clashes directly with its fiscal policies. It may be nonsense, but it could lead to the cumulative unravelling of the global trading system…

Should-Read: Ben Bernanke: Monetary Policy in a New Era

Should-Read: If only this Ben Bernanke (2017) (and Paul Krugman (1998)) could have had the ear of some central bank leader over 2006-2014!

Ben Bernanke: Monetary Policy in a New Era: “Outside of making a stronger case for proactive fiscal policies, there are two broad possibilities…

…Monetary policymakers could make greater use of new tools… both forward guidance and quantitative easing are potentially effective supplements to conventional rate cuts, and that concerns about adverse side effects (particularly in the case of quantitative easing) are overstated. These two tools can thus serve to ease the ZLB constraint in the future…. A second broad response to the problem is to modify the overall policy framework…. I propose… a “temporary price-level target” that kicks in only during periods in which rates are constrained by the ZLB…
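To give the flavor of the “temporary price-level target” (a rough sketch of the kind of rule Bernanke is gesturing at, not his specification; the 2-percent path and all numbers are invented):

```python
# Stylized "temporary price-level target": while at the zero lower
# bound (ZLB), hold the policy rate at zero until the price level
# regains the target path extrapolated from the start of the ZLB
# episode. All parameters and data are invented for illustration.

TARGET_INFLATION = 0.02  # assumed 2%/year target path

def stay_at_zero(price_level, base_price, years_at_zlb):
    """True while the price level is still below the target path."""
    target_path = base_price * (1.0 + TARGET_INFLATION) ** years_at_zlb
    return price_level < target_path

# Five years into a ZLB spell during which inflation averaged only 1%:
base = 100.0
price_now = base * 1.01 ** 5  # about 105.1 vs. a path of about 110.4
print(stay_at_zero(price_now, base, 5))  # True: keep the rate at zero
```

The point of the “temporary” qualifier is that the price-level target binds only during (and just after) ZLB episodes; in normal times the framework reverts to ordinary inflation targeting.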

Weekend reading, “racial health disparities, wealth inequality, and labor market tightness” edition

This is a weekly post we publish on Fridays with links to articles that touch on economic inequality and growth. The first section is a round-up of what Equitable Growth published this week, and the second is work we’re highlighting from elsewhere. We might not be the first to share these articles, but we hope by taking a look back at the whole week, we can put them in context.

Equitable Growth round-up

This week, Equitable Growth released three new working papers:

The first is by Darrick Hamilton, an associate professor of economics and urban policy at the New School for Social Research, who analyzes the role of racism and stigma in persistent racial health disparities. Hamilton also wrote a column on his research. He explains that racial health disparities persist for black Americans regardless of socioeconomic status and, paradoxically, often worsen with more education.

Jess Benhabib and Alberto Bisin, both professors of economics at New York University, wrote the other two papers, one with NYU Ph.D. candidate Mi Luo. Both look at wealth distribution in the United States and why it’s so unequal. In a separate column, Nick Bunker digs into the papers’ main findings.

Bunker also has a new issue brief, which asks “Just how tight is the U.S. labor market?” As wage growth continues to be tepid, Bunker’s analysis shows that the labor market is not as tight as the low unemployment rate would have you believe.

Every month the U.S. Bureau of Labor Statistics releases data on hiring, firing, and other labor market flows from the Job Openings and Labor Turnover Survey, better known as JOLTS. We highlight a few key findings through graphs using data from the report.

In an op-ed for the Washington Business Journal, Heather Boushey argues that repealing and replacing D.C.’s paid family leave act won’t just harm workers but also businesses, D.C.’s government, and the city’s economy.

Links from around the web

In an interview published by ProMarket, Anat Admati, the George G.C. Parker Professor of Finance and Economics at the Graduate School of Business, Stanford University, argues that economists’ and governments’ failure to address the growing concentration of corporate power and rising inequality has had severe consequences for economic and political stability. [promarket]

Perry Stein writes about a new Georgetown University study that finds that Washington, D.C.’s growing economy is leaving the city’s longtime black residents behind, mirroring a trend in cities across the nation. The inequities, Stein writes, can be traced back to discriminatory practices that prevented black residents from participating in the economy. [washington post]

Tanvi Misra interviews MacArthur grant recipient and New York Times journalist Nikole Hannah-Jones about how the legacy of historic discriminatory practices is not the only factor in persistent racial segregation. Hannah-Jones maintains that “there are policymakers who are making decisions right now that are maintaining segregation.” [citylab]

In today’s economy, only the most educated Americans can afford to pick up and move for better opportunities, a serious problem in a dynamic economy. Alana Semuels writes about how, even as jobs decline in some regions of the United States, skyrocketing housing costs in the best-paying cities prevent lower-income workers from relocating. [the atlantic]

Ana Swanson and Jim Tankersley examine a new International Monetary Fund report warning governments that they risk undermining global economic growth by cutting taxes on the wealthy. [new york times]

Friday Figure

From “Post-racial rhetoric, racial health disparities, and health disparity consequences of stigma, stress, and racism,” by Darrick Hamilton.

Must-Read: Olivier Blanchard and Lawrence Summers: Rethinking macro stabilization: Back to the future

Must-Read: I read this as saying, in one respect: “the neoclassical synthesis, and the resulting decision by MIT Keynesians to focus on the errors of Cambridge Keynesians and to build bridges not to them but to Chicago monetarists, was, in retrospect, a big mistake”. I remember Larry saying in, I think, 1981: “there are lots of careers to be made and knowledge to be gained by mathing up Keynes’s General Theory properly”. Yet, the honorable examples of Roger Farmer and a number of others notwithstanding, too much macro has instead been off chasing squirrels for two generations:

Olivier Blanchard and Lawrence Summers: Rethinking macro stabilization: Back to the future: “Lessons from past crises…

…The Great Depression: The economy can implode…. Need for aggressive policies…. Apparent success, from 1940 to the late 1960s. The stagflation of the 1970s: The Keynesian approach…. Think of fluctuations as “business cycles”…. With predictable policy rules, economy will be stable…. Apparent success, from the mid 1980s to the mid 2000s

The three main lessons we draw from this crisis…. Centrality of the financial system… nature of fluctuations… low rates (“secular stagnation”)… interact[ing] with the first two. One should add, but we leave it aside: The increasing salience of inequality (interacting with low growth)….

Think of events of last ten years: Runs… liquidity trap for nearly 10 years… remaining unemployment gaps… output far below the pre-crisis trend in AEs. Business as usual? No: Economies do not self-stabilize… implo[sion]… hysteresis… need strong pro-active and reactive policies…. In many ways, “Back to the future” and the Keynesian revolution

Slight (but productive) tensions between the two authors…. Evolution or Revolution?

  • The case for Revolution
    • Financial crises very likely again. Poorly understood
    • Economies unstable. Non linearities essential
    • Secular stagnation here to stay.
    • Not amenable to VAR, DSGEs. Need new approaches
  • The case for Evolution
    • Models can be extended. Much wisdom to be kept
    • Non linearities mostly in “dark corners”
    • Financial crises will remain rare events
    • Can be handled with the right combination of the 3 policies.

But common agreement on the need for change.

Should-Read: Tim Duy: Fed Is Ignoring Actual Inflation Data

Should-Read: Tim Duy: Fed Is Ignoring Actual Inflation Data: “Policy makers may be relying on the wrong model as they push for a December rate hike…

…Market participants should be aware of some fairly significant monetary policy dangers over the next year. One risk is that the Fed mistakenly adheres to a broken Phillips curve framework… possibly triggering the recession gradualism is meant to avoid. What concerns me most, however, is… stepping up the pace of rate increases should inflation accelerate. After all, they already tightened in advance of that quickening. I am worried that they will lose sight of the lags in monetary policy and, upon seeing inflation accelerate, conclude that their Phillips curve approach was correct all along and that they are falling behind the curve. This would almost certainly be a recipe for recession…
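For reference, the Phillips-curve framework Duy worries the Fed over-relies on is usually written in something like its expectations-augmented textbook form (standard notation, not Duy’s own):

```latex
% Expectations-augmented Phillips curve (textbook form): inflation
% rises when unemployment u_t falls below its natural rate u^*.
\pi_t = \pi^{e}_t - \kappa \, (u_t - u^{*}) + \varepsilon_t
```

Duy’s worry, in these terms, is that the Fed keeps tightening on the forecast from the unemployment-gap term even as measured inflation fails to cooperate, and that the lags of monetary policy mean the mistake shows up only after it is too late to undo.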