12:13:57 From Lita-NBER Staff : Welcome to the Summer Institute 2020 HC Workshop
12:27:11 From Matthew Notowidigdo : Thanks!
12:39:35 From heidi williams : Hi, everyone — just fyi we are live-streaming on YouTube so it probably makes sense to mute for now. We’ll get started in a few minutes.
12:48:09 From heidi williams to Lita-NBER Staff (Privately) : Lita, I’ll go ahead and get started, so you can take down that slide. Thanks!
12:53:07 From Sarah Miller : Hi everyone. Good to see you all!
12:53:32 From Shooshan Danagoulian : Good to see everyone! Very nice to get together in this setting.
12:56:47 From Amy Finkelstein : fun reading: http://www.princeton.edu/~dixitak/home/R0ForCovidRes.pdf
12:58:58 From Amanda : Peter and I are here in the chat box if folks have clarifying questions!
13:07:51 From Joe Doyle : I wonder if some plans are better for relatively healthy beneficiaries, while others are better for relatively sick patients (match quality)
13:09:27 From Peter Hull : Hi Joe -- great question. In the baseline framework we rule out meaningful unobserved treatment effect heterogeneity, but obviously we might think that's important in this setting given how important we think it is for hospital quality. We have extensions to allow for selection on gains in the paper's appendix -- the basic IV logic generalizes (especially the "fallback condition" which Jason will soon mention)
13:09:46 From Ben Handel : I like this slide a lot. Very cool.
13:10:37 From Jonathan Kolstad : Related to Joe’s question, what happens if you remove duals? I might think mortality effects of a plan are quite heterogeneous overall, but particularly among that population (both in getting into bad plans and in affecting outcomes)
13:10:47 From Peter Hull : We also run some robustness checks where we at least interact observational mortality with observables (age, dual status, etc.) to see if that matters, and it turns out not to matter much (again, in the paper appendix)
13:11:03 From Michael Esman Chernew : Is "plan" plan, carrier, or contract? What happens to folks that end up in FFS? Is the disenrollment from MA the same for those in terminated plans?
13:11:57 From heidi williams : (Boring zoom logistics, not a comment) Just as a reminder, please mute yourself if you are not talking to avoid background noise. I am muting people as I notice as well. Thanks!
13:12:05 From Jonathan Kolstad : Doesn’t it surprise you that it is so robust, given heterogeneity in mortality and how a plan might affect it?
13:12:11 From Peter Hull : @Jonathan also a great question. We do that robustness check as well in the paper's appendix. Again, a similar "forecast coefficient", though it's possible there is such heterogeneity that gets washed out
13:12:15 From Maria Polyakova : And to follow up on Michael, how generalizable are results from plans that are terminated - can you rule out that there is something about termination, i.e. they kind of stop caring/working before they terminate, but expect to terminate?
13:12:27 From Amanda : @Mike, sorry for the confusion. Contract ID-plan ID-county-year. In robustness checks (in the appendix), we kick out anyone who switches to FFS. Results are similar.
13:13:11 From Shooshan Danagoulian : It sounds like you look at termination of plans which are either well above or well below the non-terminated mortality rates. Do you make use of plans with middle-of-the-pack mortality rates? Can you conclude direction of mean reversion for these plans?
13:13:33 From Amanda : @Maria, we also show that the main effect of termination isn't that big. But that isn't strictly necessary for our strategy. That said, your question is really about the "fallback condition" that Jason will talk about in a minute
13:13:55 From Peter Hull : Hi Shooshan -- yes, we actually use all terminated / non-terminated plans (each point Jason is showing is a decile of lagged observational mortality), but the intuition is more useful to give with extreme examples
13:15:13 From Ben Handel : Are you able to tie the mortality differences to any specific plan fundamentals? E.g. tighter formularies or more restrictive networks? I don't think you need that for this paper, already doing a lot, but would be interesting / potentially valuable.
13:15:14 From Jonathan Kolstad : Can you look at particular conditions in which HCCs incentivize plan interventions more or less? That is, there are some things that risk adjustment means they should do a lot of work on, and other things that are also bad but probably not impacted as much. That would help focus on things plans are actually trying to do.
13:15:21 From Shooshan Danagoulian : Is the effect you estimate linear, then? Or is it disproportionately greater for much worse plans?
13:15:43 From Amanda Kowalski : How should I think about the fallback condition in light of the main result that people do select plans based on mortality to some extent?
13:15:53 From Amanda : @Ben, we'll get there -- we can show that plans that spend more are better. But the R^2 of these things is low. It's probably networks, but we can't test directly yet. Unsure if narrow networks or just "low quality" networks.
13:15:58 From Brad Shapiro : Upvoting Amanda K's question.
13:16:19 From Michael Geruso : Hi -- what should we think about the fact that for both terminated and non-terminated plans there is a positive slope in the balance figure? Is it correct that that implies that overall there is sorting of sick people to plans with worse observational mortality? Is this a bias we’d be concerned about in OLS effects, as a separate issue from whether there is balance between treated (in terminated plans) and not treated? The balance figure went quick, sorry if I'm misunderstanding.
13:16:49 From Peter Hull : @Shooshan -- we define the forecast coefficient by a linear regression, but in principle you could estimate more flexible specifications. We haven't looked into that too much, but it's a good idea to explore
13:16:53 From Maria Polyakova : @Amanda, thanks. One more question: do you have a sense of how the magnitude of your results compares to plan fixed effects, or maybe brand fixed effects, that in a choice model with financial characteristics would in theory capture any differences in mortality that consumers are aware of?
13:17:00 From Amanda : @Kolstad, good idea. We can try to do more on this front, though I'm a little worried about power
13:17:18 From Tim J. Layton : @Amanda, I thought most of the cancelled plans were PFFS. Don't they not have networks?
13:17:20 From Peter Hull : @Amanda/Brad: the discrete choice model allows for unobservable selection
13:17:45 From Peter Hull : i.e. that sicker patients systematically choose different types of plans
13:18:09 From Amanda : @Maria, great question. We've looked a little bit at brands. In unreported regressions, we find that the largest insurers (Humana, United, and Blue plans) appear to supply higher quality plans.
13:18:25 From neale mahoney : Can you project the plan effects on the average hospital effects of their networks? This would allow you to see how much of the plan effects are attributable to networks.
13:19:18 From Amanda : @Tim, we're not recovering \beta for any *given* plan, so the fact that many (but not all!) of the terminated plans are PFFS doesn't matter for estimating the reliability of observational mortality (\lambda)
13:19:24 From Amanda : (Happy to discuss more!)
13:19:27 From Peter Hull : @neale great question. Something we're working on. Network data is spotty, though, so it's a bit hard to get a precise estimate (let us know if you have ideas please!)
13:19:51 From Amanda : @neale, historical network data (remember we need 2008-2011) is really tough. For three states, we are actually able to construct more detailed networks from the SID with info on Medicare Advantage. But this isn't really enough data to infer much about the role of networks more generally.
13:20:36 From Joe Doyle : How about constructing a network based on where you see beneficiaries receiving care, at least as a proxy that reflects the places beneficiaries go depending on their plan?
13:20:41 From Ben Handel : Can't you do something with claims data to get a somewhat noisy but still useful dataset on plan-specific networks? Doing that in a project I'm working on now.
13:20:46 From Douglas Staiger : Is the under-reaction to "true" (unobserved to us) mortality, or to the EB (posterior mean based on the data) prediction?
13:21:34 From Joshua Gottlieb : @Ben I don't think they have claims in this period for MA, but presumably hospital discharge data are more readily available & could suffice?
13:21:35 From neale mahoney : Thanks Amanda. Yes, I was thinking of using encounter data to construct the weighted average. Not sure if there is encounter data for this time period, though.
13:21:58 From Leemore Dafny : The policy simulation -- remove lowest-quality plans -- is interesting. Can you reliably identify these plans, i.e. is observable mortality highly correlated over time, or would you do better by explicitly limiting participation based on factors that appear related (e.g. plan size after a few years allowed for ramp-up)?
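The "forecast coefficient" and the reliability of observational mortality (\lambda) discussed above can be illustrated with a small simulation. This is a generic sketch of the idea (the slope from projecting true plan effects on the observational measure), not the paper's code, and every number in it is made up:

```python
# Illustrative only: the forecast coefficient is the slope from regressing
# true plan mortality effects (beta) on the observational mortality measure.
# With purely classical estimation noise it equals the reliability ratio
# var(beta) / var(obs); non-random selection into plans would bias it, which
# is what a termination-based IV strategy is meant to handle.
import numpy as np

rng = np.random.default_rng(0)
n_plans = 5000
beta = rng.normal(0.0, 1.0, n_plans)    # hypothetical true causal plan effects
noise = rng.normal(0.0, 0.5, n_plans)   # hypothetical estimation noise
obs_mortality = beta + noise            # observational mortality estimates

# Slope of the line of best fit of beta on obs_mortality
forecast_coef = np.polyfit(obs_mortality, beta, 1)[0]

# Under these assumed variances the slope should be near 1 / (1 + 0.5**2) = 0.8
print(round(forecast_coef, 2))
```

A coefficient of one, as in the paper's headline result, would instead mean the observational measure forecasts the true effect one-for-one.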
13:22:05 From Amanda : @Ben, in addition to more network data, we're hoping to use physician identifiers in Part D data to try to construct networks of physicians associated with different plans and leverage the encounter data
13:22:06 From Peter Hull : @Doug to the "true" effects. We get this by doing a simple Berry inversion of a discrete choice model and plugging the resulting mean utility into the forecast IV framework
13:22:54 From Douglas Staiger : That sounds like you are using the true. So if people don't observe this, should we be surprised that they under-react to it?
13:23:01 From Peter Hull : @Mike G (sorry if Amanda answered and I missed it) the positive slope in the balance check means that observably sicker patients tend to select into plans with higher observational mortality. We're not using that variation in our framework, but it is an interesting finding
13:23:17 From Ben Sommers : Given how limited most consumer response is to quality data that is publicly available and easily interpretable, are we surprised that these mortality differences wouldn’t show up as strong differences in WTP?
13:23:20 From Ellen Meara : Voting to hear more about @DougStaiger's question and follow-up
13:23:47 From Amanda : @Leemore, agreed there are lots of issues with implementation! (Our goal was to echo the Chetty et al. simulation.) Worth thinking through!
13:23:55 From Michael Geruso : Thanks -- I'm having a hard time squaring that selection with the fact that the forecast coefficient is one. Is there intuition?
13:23:59 From Peter Hull : @Doug personally, I'm not terribly surprised that consumers aren't so aware of these effects, especially since star ratings do not contain mortality information (in contrast to e.g. Hospital Compare!)
13:24:51 From Peter Hull : (relates to @Ben's question too!)
13:24:54 From Amanda : @Doug and @Ellen, I'm also not surprised. To Matt's point, we may want to pilot info on observational mortality with our parents first
13:24:56 From Jonathan Kolstad : Removing the lowest quality plans means removing brand by county. That might be easier or harder to implement depending. Am I correct in thinking about it that way?
13:25:01 From Douglas Staiger : This may not be a big deal, if there isn't much shrinkage (e.g. you estimate the plan effect very precisely). I agree that I'm not surprised -- and it probably reflects poor info among consumers (e.g. if you provided these estimates people would react!)
13:25:23 From Peter Hull : @Doug there's indeed not much shrinkage here -- the average shrinkage factor is super close to one
13:25:30 From Amanda : @Kolstad, totally agree (with you and Leemore) that implementation is tough -- more of a thought exercise than a policy proposal
13:25:45 From Michael Esman Chernew : How stable are plan-specific mortality estimates over time? The figure on consistency over time suggests very high persistence. Am I interpreting that correctly?
13:26:03 From Peter Hull : @Mike we are trying to estimate the forecast coefficient while allowing for non-random selection (that's what the fallback condition helps with).
13:26:09 From Amanda : @Mike, the plan observational mortality estimates are time-invariant
13:26:54 From Ben Sommers : But to Mike’s point -- can you split your sample time period in half, and see how stable those plan observational estimates are?
13:27:03 From Ben Sommers : across time periods, that is
13:27:09 From Jonathan Kolstad : On removing plans, though, I can imagine a version in which sufficiently low performance in a county-specific option leads to either removal or a big hit to rates (effective removal). That seems plausible -- more so than saying to United, you’re out of MA entirely for a year
13:27:19 From Amanda : @Ben, we can do that!
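The split-sample stability check Ben suggests could be sketched as follows. This is a purely illustrative simulation with invented variances; the real check would correlate actual plan-level estimates from the two halves of the sample period:

```python
# Illustrative: estimate each plan's observational mortality separately in
# two halves of the sample period, then correlate the two sets of estimates.
# A high split-half correlation indicates a stable, persistent plan component.
import numpy as np

rng = np.random.default_rng(1)
n_plans = 2000
plan_component = rng.normal(0.0, 1.0, n_plans)  # hypothetical persistent component

# Independent sampling noise in each half of the period (assumed sd = 0.6)
half1 = plan_component + rng.normal(0.0, 0.6, n_plans)
half2 = plan_component + rng.normal(0.0, 0.6, n_plans)

split_half_corr = np.corrcoef(half1, half2)[0, 1]
print(round(split_half_corr, 2))
```

With these assumed variances the correlation should be near 1 / (1 + 0.36), around 0.74; in real data, a correlation far below the estimated reliability would suggest the plan effects drift over time.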
13:27:22 From Michael Esman Chernew : I was going where Ben went, but he types quicker
13:27:22 From Peter Hull : @Ben we haven't tried that yet. We probably should
13:27:56 From Ben Sommers : I’m adding that to my CV -- “I type faster than Mike Chernew”
13:28:02 From Michael Esman Chernew : I ask because many policy responses require stability.
13:28:08 From Leemore Dafny : But can you speak faster than Mike or Jon?
13:28:12 From Ben Sommers : nope.
13:28:42 From Douglas Staiger : Good point by Matt. Uncertainty on lambda leaves uncertainty about the policy impacts. Bias means we shouldn't take the magnitude of the effects seriously -- they are poorly calibrated, and need to be multiplied by lambda.
13:29:03 From Tim J. Layton : Am I right here that a violation of the identifying assumption would be that mortality is trending differentially in canceled high-mortality plans and non-canceled high-mortality plans? Is there some kind of placebo or something that you use to rule that out? Or is that just a fundamental assumption you have to make?
13:29:05 From Peter Hull : Yes, good point!
13:29:13 From Peter Hull : (re: Matt/Doug)
13:30:01 From Peter Hull : Though I'd note the Rothstein paper is, IMO, a bit of an outlier in terms of the precision of its forecast coefficient estimate (as well as missing the fallback condition...). Many teacher VAM validation papers also have big SEs :)
13:30:24 From Amanda : @Tim, you're right. Jason didn't show some more diff-in-diff style pictures (that might be what you have in mind), but they're in the draft and backup slides
13:30:29 From Peter Hull : @Tim that would be a concern. We have a diff-in-diff type of validation on trends in the paper
13:30:45 From Tim J. Layton : great. i'll check it out. thanks!
13:33:04 From Ben Handel : True. But there are interim measures of health / health care that could provide linkages between plan-specific effects and noisy measures like mortality.
13:33:26 From Peter Hull : Yes. Those would be helpful to collect here!
13:34:02 From Peter Hull : FWIW, the lack of a (spooky-sounding) "lagged mortality" measure also will generally lead to large SEs in the forecast IV
13:35:53 From Peter Hull : One point on the "omnibus test" (Angrist et al. 2017!) -- you can sort of see it from the quality of the line of best fit through the decile plots Jason was showing. The high R^2 from that plot relates exactly to that kind of overidentification test
13:36:19 From Douglas Staiger : In the teacher VA lit, this comes down to whether there are remaining teacher effects after controlling for EB estimates of VA -- so forecast unbiased, but for any given plan there are systematic errors.
13:36:34 From Joshua Gottlieb : How well does residential zip code predict mortality? If there's enough variation, maybe you could get an average lagged mortality from the exact locations where people live
13:36:50 From Michael Esman Chernew : My concern with the network story (which I tend to like) is that Zack's work would suggest referrals depend on PCPs, and I believe plan switching has much less PCP switching. So we need a story of PCPs changing referrals based on network inclusion, which is certainly possible, but I would have thought I would have heard of that more. I think a story of generosity of formulary may be more plausible and seems consistent with the slide on mechanisms, but it went by quickly. Essentially the OOP may drive both the risk and mortality effects
13:37:28 From Amanda : @Josh, good idea!
13:38:05 From Amanda : @Mike, totally agree. On PCPs, following Yev and Austin Frakt and friends, we're going to use physician identifiers in Part D data to try to construct networks of physicians associated with different plans, then use carrier data to try to construct mortality rates for physicians and aggregate these to the plan level.
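The empirical Bayes shrinkage in Doug's teacher value-added analogy (and Peter's earlier point that the average shrinkage factor is close to one) reduces to a simple signal-to-noise weight. A minimal sketch with made-up variances, not the paper's actual estimates:

```python
# Illustrative: the EB posterior mean shrinks a plan's noisy estimate toward
# the grand mean with weight signal_var / (signal_var + noise_var). When a
# plan's effect is precisely estimated (small noise_var), the shrinkage
# factor is near one and the posterior nearly equals the raw estimate.
def eb_posterior_mean(raw_estimate, grand_mean, signal_var, noise_var):
    w = signal_var / (signal_var + noise_var)  # shrinkage factor
    return w * raw_estimate + (1 - w) * grand_mean

# Hypothetical numbers: a precisely estimated plan, so little shrinkage
post = eb_posterior_mean(raw_estimate=2.0, grand_mean=0.0,
                         signal_var=1.0, noise_var=0.05)
print(round(post, 3))  # stays close to the raw estimate of 2.0
```

With little shrinkage, the distinction Doug raises (under-reaction to the "true" effect vs. to the EB prediction) matters less, since the two nearly coincide.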
13:38:40 From Amanda : We look at donut hole coverage and find that donut hole coverage is, in fact, correlated with \beta, the true causal effects
13:42:27 From heidi williams : Just fyi: I believe the co-hosts (Amitabh, Amy, Ben, Neale, and I) can’t raise our hands. So let me “raise my hand” here and say that I would love to have Doug Staiger clarify some of his comments on relating this to the teacher value-added literature.
13:42:49 From Brad Shapiro : folks on youtube can't see the chat, so it's probably still worthwhile to bring up interesting points out loud, even if they already came up in the chat.
13:43:02 From Amanda : @Pragya, not sure they're directly comparable. I'll let Sarah or Laura chime in on treatment effect heterogeneity by age, but we're looking at an older group of beneficiaries. We've thought a little bit about whether the hospital effects "add up" to the plan effects
13:48:54 From Sarah Miller : I don't see Pragya's comment for some reason, but in our Medicaid paper we found larger mortality effects for older ages. (But everyone under 65 in our setting)
13:51:34 From Ben Sommers : Re: Sarah’s comment, in my paper w/ Sharon Long & Kate Baicker, our mortality results after the Massachusetts reform were larger in absolute terms for 35-64 year olds, but larger in relative terms for 20-34 (but with wider CIs for the latter)
13:52:04 From Ben Sommers : And also no data on those > 65
13:54:52 From heidi williams : Amanda Kowalski has her hand raised.
13:54:53 From Peter Hull : thanks!
13:57:00 From Matthew Notowidigdo : (Not a paper yet!)
13:57:46 From Matthew Notowidigdo : Agree!
13:58:05 From neale mahoney : I have a question but we might not have time
14:00:08 From Brad Shapiro : "If you like your plan you can keep it" ;)
14:00:22 From Peter Hull : Neale -- let's chat!
14:00:31 From Peter Hull : Thanks everyone for the excellent questions!!!!
14:00:45 From Amanda : Yes, thanks everyone!
14:00:59 From heidi williams : Just to give everyone a sense, we have ~300 people logged in to view on YouTube right now.
14:09:38 From Ellie Prager : In practice, should we think of consideration-on-observables as arising bc people (or their kids/insurance pickers) apply a heuristic such as "I don't want a deductible above $X"? Or do you have a different practical mechanism in mind?
14:10:13 From Ben Handel : @Ellie, I will try to speak to your question during discussion. I think in practice in these models it is not clear.
14:10:38 From Ben Handel : And to be clear, it can be ok that it is not clear, but it can also matter.
14:11:29 From Maria Polyakova : Is the key conceptual difference from models where there is a first stage of whether consumers pay attention vs. not that, in this case, paying attention is a function of plan attributes? Or is there something else?
14:12:27 From Ben Handel : @Maria: key difference here is that this paper is about active choice, i.e. non-inertial choice. Otherwise, it would be quite similar to an inattention model, e.g. in Ho et al. (2017)
14:13:33 From Maria Polyakova : @Ben, makes sense, but it seems like those would be different data generating processes that get you similar observational data, so it's not clear how to figure out which of the two worlds we are in?
14:13:38 From Ben Handel : PS: Happy to answer questions while Maura talks. But my answers don't reflect what she might say at all!
14:13:42 From Joachim Winter : What are the implications if firms can actively target consumers and thus influence their consideration sets?
14:14:47 From Keith Marzilli Ericson : This paper assumes expected utility over wealth, and shows choice of low-deductible plans can be rationalized by consumers not considering high deductibles. How would your model be able to distinguish not considering high-deductible plans versus not preferring (but indeed considering) high-deductible plans?
14:14:48 From Ben Handel : I think targeting by firms here is generally ok for the model if it just brings plans into the consideration set. One other implication is that structural assumptions that plans are independently in consideration sets would potentially be violated.
14:15:14 From Ben Handel : @Keith: I'm going to talk about that in discussion. I think it can progress on that dimension.
14:15:29 From Maria Polyakova : @Joachim and @Ben: and similarly, the homogeneity may also be violated if different consumers are targeted differently
14:15:53 From David Slusky : What about household-level decisions if both adults are on Medicare? Are couples using the same metrics to make decisions? Are they doing different things that together represent a reasonable level of risk aversion?
14:16:33 From Maria Polyakova : @David, I don't think there is admin data on Part D that links household decisions, so as far as I know we don't really know.
14:16:43 From Ben Handel : Yes, @Maria. That is a really good point. I agree.
14:16:55 From Ellie Prager : Do these high "firm FEs" correspond to the most aggressive advertisers? (relates to Joachim's point)
14:17:28 From David Slusky : What about brokers? Are they active in some markets vs. others?
14:17:30 From Ben Handel : @Ellie. Unclear. It relates to Keith's question. Preferences or attention due to advertising?
14:17:40 From Emily Oster : I had a similar question about advertisers. Or maybe people are worried about less common plans, for example worrying they might just shut down?
14:18:22 From neale mahoney : Thanks everyone. I’m trying to keep track of these to tee up the Q&A
14:18:26 From Brad Shapiro : @Ellie & Joachim & Emily - will discuss more tomorrow, but advertising, while potentially a confound, very rarely has high explanatory power in explaining choices, and typically has very low effects. I'm not super concerned that advertising is throwing everything off here. My prior is it is much less important relative to the other forces.
14:19:28 From Ellie Prager : @Brad I don
14:19:45 From Ben Handel : Yeah, I think about advertising and brand effects as linked, so it depends how you view those as related or not
14:19:55 From Joseph P. Newhouse : Amanda Starc has a paper showing a delta for the United/AARP plan and for Mutual of Omaha from paying high commissions
14:20:09 From Ellie Prager : @Brad I don't think it would be a "problem" for the paper if advertising did affect consideration. The insurer FEs would soak it up. But looking forward to learning more tomorrow!
14:20:46 From Emily Oster : Agreed with @Ellie. I think my question is how to think about the firm effects given that they drive inefficiencies.
14:20:55 From Emily Oster : (at least that is what I think this is saying)
14:21:12 From Ellie Prager : @Emily bingo, totally agree
14:21:31 From Shooshan Danagoulian : Are some of these dimensions of choice more important/policy-sensitive than others? If so, does it make sense to evaluate all of these limited considerations on equal footing?
14:21:40 From Victoria Marone : Could you parse out what portion of the $226 increase in consumer CE is coming from ‘risk protection’ versus ‘everything else’ contained in the utility function?
14:21:42 From Shooshan Danagoulian : @Neale
14:23:11 From neale mahoney : @Shooshan, I’ve flagged it.
14:23:14 From Haizhen Lin : I'd like to know more about the identification of the choice set, because the five plan attributes used seem to affect the utility as well?
14:23:27 From Shooshan Danagoulian : Thanks @Neale.
14:28:07 From Daria Pelech : In some years, CMS had a feature that would let you enter your drugs and then recommend plans to you. I wonder how that feature might affect results.
14:29:34 From Shooshan Danagoulian : @Daria This paper uses 2010 data only. Did the CMS website exist at that point? I have certainly navigated it in recent years.
14:29:59 From Joseph P. Newhouse : yes, the CMS aid existed
14:30:57 From Daria Pelech : @Shooshan — I don’t know. I also don’t really know how this would affect consideration sets, because one would assume/hope that a CMS tool would not steer people to dominated plans. But it could certainly limit what you consider. @Joe — thanks!
14:31:57 From Shooshan Danagoulian : @Daria But if Maura is using a single year, it should not introduce heterogeneity. But, if it did not exist then, the effects she estimates may be attenuated with the introduction of the CMS aid.
14:32:30 From Shooshan Danagoulian : @Joe Thank you! I am under the impression that the website was launched a couple of years after Part D started. But I am not certain.
14:34:49 From Colleen Carey : I agree with Joe that the CMS Plan Finder website was definitely available in 2010.
14:35:22 From Michael Esman Chernew : Sorry I missed the beginning, but what if brand is really proxying for unobserved formulary details?
14:35:42 From neale mahoney : @Ben, you have 30 seconds
14:37:45 From Joseph P. Newhouse : it may be a relevant choice to use a broker vs. not
14:38:14 From Colleen Carey : @Mike, she follows Ho et al. and others in computing expenditure for realized drug demand in all plans, as well as a variance term that captures how each plan performs across all individuals in a given "cell" of similar forecasted utilization
14:38:54 From Colleen Carey : @Mike, but one could hypothesize that individuals "trust" a brand to offer good coverage no matter what drugs are demanded, which could be a big source of plan fixed effects
14:40:24 From Michael Esman Chernew : Yes. Brand measures both. But the weight on each has welfare and policy implications
14:45:30 From Ellie Prager : Brad go for it, my audio is fuzzy
14:50:43 From David Slusky : @Maura - thank you.
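The consideration-set logic being debated above (Keith's "not considering" vs. "not preferring" high deductibles) can be sketched with a generic two-stage model: each plan enters the consideration set with some probability, and choice is a logit over whatever is considered. This is a textbook-style formulation with made-up utilities and consideration probabilities, not Maura's model:

```python
# Illustrative: P(choose j) = sum over consideration sets C containing j of
# P(C) * logit-choice of j within C, with plans considered independently.
import itertools
import math

def choice_prob(target, utilities, consider_probs):
    """Unconditional probability of choosing `target` under limited consideration."""
    plans = list(utilities)
    prob = 0.0
    for included in itertools.product([0, 1], repeat=len(plans)):
        cset = [j for j, inc in zip(plans, included) if inc]
        if target not in cset:
            continue
        p_set = 1.0  # probability this exact consideration set is realized
        for j, inc in zip(plans, included):
            p_set *= consider_probs[j] if inc else 1 - consider_probs[j]
        denom = sum(math.exp(utilities[j]) for j in cset)
        prob += p_set * math.exp(utilities[target]) / denom
    return prob

# Hypothetical numbers: the high-deductible plan has higher mean utility but
# is rarely considered, so the low-deductible plan wins most of the time
utilities = {"low_ded": 1.0, "high_ded": 1.5}
consider = {"low_ded": 1.0, "high_ded": 0.3}
print(round(choice_prob("low_ded", utilities, consider), 2))
```

The point of the sketch is Keith's identification worry: the same dominant low-deductible share could instead come from full consideration with a utility penalty on high deductibles, so choice shares alone may not separate the two stories.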
14:50:56 From David Slusky : Paper is Gruber, Handel, Kina, Kolstad
14:51:01 From David Slusky : See https://www.ehealthecon.org/pdfs/Gruber_etal_2020.pdf
14:51:08 From David Slusky : and https://www.youtube.com/watch?v=95E201M63lE&feature=youtu.be
14:53:34 From Justin Sydnor : @Maura -- does the framework make predictions about how "mistakes" should vary with choice set size? (Some of my work and also stuff by Abaluck and Gruber suggests mistakes are very prevalent even in very small choice sets).
14:55:28 From Matthew Notowidigdo : I love that paper! (Go UChicago!)
14:55:58 From Wes : @Matt Me too! Gorgeous paper
15:00:45 From Maria Polyakova : And this time we have a bunch of co-authors, so we will try to answer your questions in real time!
15:04:27 From Matthew Notowidigdo : That went fast; do doctors organize themselves into pass-throughs, and do you measure pass-through income?
15:04:55 From Maria Polyakova : Excellent question. They frequently do get incomes from S-corps, but not always
15:04:59 From Amitabh Chandra : Just so I understand: if I’m a doc and I own a CT scanner and get the facility fee from owning it, you won’t count these fees because they’re income but not earnings?
15:05:09 From Amitabh Chandra : Sorry for not keeping up.
15:05:10 From Maria Polyakova : We include pass-through income in what we call "business income"
15:05:20 From Matthew Notowidigdo : Got it; thanks! Amazing data
15:05:59 From Emily Oster : I thought the answer to @Amitabh was that this WOULD be included in earnings
15:06:03 From David Slusky : @Amitabh I would think that would be in AGI
15:06:04 From Emily Oster : or the total measure of earnings anyway
15:06:10 From David Slusky : But I can't type as fast as @Emily
15:06:16 From Maria Polyakova : @Amitabh - depends on how your ownership is organized. If you are an employee, you get a W-2 from your hospital. If you own it in the sense of having an S-corp or sole proprietorship, then you get a profit distribution
15:06:17 From Amy Finkelstein : is this household or individual income?
15:06:45 From Maria Polyakova : @Emily, yes, it would be included unless the doc is only a salaried employee
15:06:59 From Hugh Shiplett : @Amy W-2 is individual, while AGI and business income are at the tax-unit level.
15:07:11 From Maria Polyakova : @Amy - individual in most graphs except for national percentiles, which are AGI-based (i.e. household)
15:07:12 From Emily Oster : North Dakota!
15:07:25 From David Slusky : And Kansas!
15:07:52 From Kevin Rinz : Anyone who wants to play with the CZ-specialty data can get it here: https://kevinrinz.github.io/data.html
15:07:54 From Ellie Prager : Can anecdotally confirm doctors get headhunted in ND and Alaska. Is Alaska in the data too?
15:07:54 From Amitabh Chandra : I understand now -- this is really cool data.
15:08:26 From Shooshan Danagoulian : @Joshua Generally, physicians believe that they must pay a premium to live in a desirable metropolitan area. Why not the same for lawyers? Is it because their skill is more valuable in metropolitan areas?
15:08:42 From Kevin Rinz : @Ellie Alaska is in the data, just not the map
15:09:01 From Ashley Swanson : Wow @ this data. Thanks for the public good!
15:09:25 From Hugh Shiplett : @Ellie AK is in the data, just not the map. Incomes in AK are similar to the Dakotas.
15:09:34 From Maria Polyakova : @Shooshan, good question why there may be differences. Lots of reasons. We speculate it is less about differences in preferences or valuation of amenities and more about differences in things like market power, etc.
15:10:19 From Fiona Scott Morton : A lawyer in DC can have a high marginal product. A doctor in DC is saving the same lives -- not different unless you have a special weight for those people!
15:10:29 From Ellie Prager : @Shooshan+Maria, I would think lawyers ARE more productive in NYC (relative to doctors) because of agglomeration economies. Lawyers in South Dakota don't have Goldman Sachs as a client
15:10:30 From Shooshan Danagoulian : @Maria Thank you! That, I think, is what I mean by "skill more valued".
15:10:41 From Shooshan Danagoulian : @Fiona I agree.
15:10:51 From Maria Polyakova : @Fiona, yes! Although there is also some nuance about how many patients per physician you may have in different places.
15:10:58 From Ellie Prager : (can't type as fast as Fiona, alas)
15:11:05 From Colleen Carey : Where do we think the half that was not passed through to physician income went? It's a fee bump to physician payments.
15:11:23 From Shooshan Danagoulian : @Ellie Yes, they are where the rent is. Doctors treat a geographically homogeneous population.
15:11:51 From Maria Polyakova : @Colleen, as you see, self-employment pass-through is much higher, suggesting that for e.g. the non-self-employed, the rest could go to "practice" / overhead, etc.
15:12:58 From David Cutler : Do you have a way to match earnings for other people in the practice to the physician? Do nurses get paid more when docs get a pay bump?
15:13:16 From Emily Oster : I want to upvote @David's question
15:13:16 From Ellie Prager : Love these concentration plots
15:13:34 From David Slusky : I also upvote @DavidCutler's question
15:13:45 From Kevin Rinz : @David et al., to the extent that they share an EIN, yes, we can do that
15:13:46 From Maria Polyakova : @David, great question. I think yes, based on EINs, and for some people we may identify professions from the ACS. We haven't looked into that yet though
15:14:06 From Ben Handel : I always upvote David Cutler's questions, without even knowing what they are.
15:14:19 From heidi williams : @DavidCutler: I will call on you to share that question with YouTube.
15:15:22 From Michael Esman Chernew : I missed the measure of the fee bump effect. Is it a price, revenue, or GE effect? If the bump affects volume (or contemporaneous things) that affect income, is that in the denominator for the share of the bump the docs got?
15:15:55 From Amitabh Chandra : Related to David Cutler’s question -- do docs employed at hospitals with large margins (operating margins) earn more? Or docs employed at hospitals with low admission thresholds?
15:16:13 From David Slusky : Given that PCPs normally make less than specialists, how much did this reduce the inequality/spread in physician earnings?
15:16:55 From David Slusky : Also, what about effects on fertility, given you can see the number of dependents on the 1040?
15:17:00 From Shooshan Danagoulian : @Joshua Physician training is never really complete. There is a substantial continuing education credits requirement.
15:17:07 From Maria Polyakova : @Amitabh, nice point! We don't have hospital info here directly, but something to look into since we do have EINs (but not sure there is a clean way to convert that to hospitals)
15:17:41 From David Cutler : TINs can be linked to hospitals with an ownership file.
15:17:48 From Maria Polyakova : @David, we haven't looked at fertility in the scope here, but that's something for future work for sure
15:17:56 From Victoria Udalova : @Amitabh and @Maria - we can get industry from ACS for a subsample
15:18:06 From Colleen Carey : The original event study had an increase in 2013 (first year of the fee bump), with a continued increase in 2014. 2014 insurance expansions plausibly differentially increased income of PCPs relative to specialists?
15:18:10 From Ellie Prager : @Maria yes, that would be tough. Many docs who are employed by hospitals for practical purposes are technically employed by the physician organization attached to that hospital. Not sure if David's crosswalk would link them
15:18:10 From Tim J. Layton : I may have missed this, but couldn't a lot of the heterogeneity in "pass-through" of the fee bump just be variation in the % of patients on Medicaid?
15:18:13 From Leila Agha : are there any measures of selectivity/“quality” of lawyers vs doctors? do they come from a comparable part of the “ability” distribution? trying to understand to what extent the mean/median lawyer is a good way to understand the outside option of the mean/median doc
15:18:26 From Ellie Prager : Upvoting Tim's Q
15:18:30 From Shooshan Danagoulian : Upvote @Leila's question.
15:18:41 From Brad Shapiro : Upvote @Tim as well
15:19:25 From Daria Pelech : Echoing Tim’s question -- those docs primarily serving Medicaid patients will be on the lower end of the distribution. And to David S’s other question, does this compress the income distribution of all physicians?
15:19:28 From Maria Polyakova : @Colleen, good point, will have to think about it
15:19:46 From Adam Sacarny : To Tim’s question, I think it would help to know more about where government physicians are employed. If it’s VA or DHA facilities, then we shouldn’t expect pass-through.
15:19:55 From David Cutler : Seems like a super high return to delaying income at a period in time when MUc is very high (early to mid-30s).
15:21:11 From Ellie Prager : There is definitely a lot of selection on ability into the longer-training and higher-earning specialties. Not a "problem" for you, but suggests caution in interpreting the training vs. earnings plot on slide 31
15:21:11 From Victoria Udalova : @Adam we identify government physicians from the ACS and don't have a way to know which government facilities are included specifically
15:21:23 From Michael Esman Chernew : The key question of course is the elasticity of physician supply and specialty choice to income changes.
I have lost how the equilibrium responds to this 15:21:30 From Maria Polyakova : @Tim, I cannot read or type as fast, so may not be understanding your question, but you mean % of patients on Medicaid per physician? These are scaled in per/physician/patient terms if that makes sense? 15:21:42 From Maria Polyakova : And using geo variation in fee bump. 15:21:54 From Maria Polyakova : Or are you thinking systematic differences in self-employed vs. not? 15:21:58 From David Slusky : What about looking at a subset of lawyers? I worry that including all of them is not as comparable to physicians. You could do lawyers who filed taxes from ZIP codes near the top 10 law schools or something like that. 15:22:13 From Adam Sacarny : @Victoria thanks - if you can distinguish federal vs. state vs. local that would get you almost the whole way there, but now I’m revealing I’ve never used the ACS in this way! 15:22:14 From Tim J. Layton : yes. i just mean variation in whether self-employed vs. others in terms of their Medicaid patient load. same for HHI, size of group, etc. 15:22:29 From Ellie Prager : @Slusky's idea is awesome 15:22:31 From Tim J. Layton : sorry. that didn't come out right 15:22:32 From Maria Polyakova : @Leila, we would love to say more about ability distribution, but are not there yet. 15:22:44 From Tim J. Layton : variation in % of patients on Medicaid across the different heterogeneity groups 15:22:51 From Michael Esman Chernew : but what if the bump changes the number of patients 15:22:56 From Leila Agha : @maria et al. love that graph at the end showing how much could be saved in aggregate by bringing down doc salaries 15:22:57 From Amitabh Chandra : How do I invest in Arbormetrix? 15:23:12 From Maria Polyakova : @David, yes! 
Very high return to delaying income 15:23:14 From Ben Handel : Upvote investing in ArborMetrix 15:24:04 From Shooshan Danagoulian : @Joshua Very interesting exploration of the data 15:24:05 From Maria Polyakova : @David, I am not quite sure why we would think that only lawyers in top 10 schools are comparable to an average physician?! 15:24:14 From Daria Pelech : @Mike, the evidence I have see is that the docs that take Medicaid patients take more of them, not that docs who don’t start taking them 15:24:17 From Daria Pelech : seen* 15:24:29 From Shooshan Danagoulian : @Maria @David I think this goes to @Leila's question about ability and selection. 15:24:37 From Ellie Prager : @Maria but you could do the same to find docs from top 10 med schools 15:25:24 From Maria Polyakova : @Tim, something for us to think about more carefully. My sense is that I doubt the distribution of Medicaid patients within regions across heterogeneity groups would follow exactly these patterns, but definitely something to think/document more carefully. Thanks! 15:25:40 From David Slusky : I think that hours and length of training are much longer much farther down the physician school rank. Whereas for lawyers pretty quickly you drop out of big law and wind up as a lower earning less hard working country lawyer 15:25:51 From David Slusky : *medical school rank* 15:25:56 From Maria Polyakova : @Mike, yes, changing number of patients would be included in this reduced form coefficient 15:26:04 From Pierre Azoulay : Would be interesting to see the earnings profile for physicians who also do research. 15:26:53 From Amy Finkelstein : is he saying that busincome is generially undererported in the ACS or esp so by docs? 15:27:26 From Maria Polyakova : I think his point is more general, @Amy. But likely particularly bad for docs, lawyers, etc. given the mishmash of income sources 15:28:16 From Maria Polyakova : @Pierre, yes! 
My intuition on the fly is they are more likely to have only W2 income at AMCs and hence probably not at the very top echelons of earners, as that comes from business income 15:28:48 From Ashley Swanson : How much have you played with gender differences? 15:28:51 From Pierre Azoulay : Doug, what is the pbm in the AMA Masterfile you are alluding to? 15:28:51 From Maria Polyakova : @Leila, thanks! 15:29:10 From David Slusky : upvoting @ashely's question. 15:30:39 From Ashley Swanson : How do different percentiles of the M/F income dists differ b/c of specialty choice, hours worked (at various points in life cycle), business income, etc.? 15:30:39 From Maria Polyakova : @Ashley, a lot! They are in line with all intuition you may have. Took it out in this draft, since beyond the facts, we didn't have much to say about why these differences exist. 15:30:53 From Victoria Udalova : NPPES data is publicly available and can be used to calculate number of physicians by specialty: https://download.cms.gov/nppes/NPI_Files.html 15:31:50 From Kevin Rinz : @Ashley We’ve done some descriptive stuff with gender. Most striking (though not shocking) thing to me is that men and women make the same amount during training, both start out on a steep trajectory in early 30s, but then women level off much earlier 15:31:58 From Maria Polyakova : @Ashley, there are still large gender differences, conditional on specialty, hours, etc., but specialty and hours also account for a substantial portion of differences 15:33:25 From Tal Gross : I'm curious how the internal rate of return to becoming a physician compares to the internal rate of return of becoming a lawyer. If there's a big difference, then one explanation (among others) is that the AMA acts as a monopolist, effectively raising physicians' returns... 15:33:28 From Colleen Carey : A Q regarding measurement of training duration: both law and medicine have many years of "on the job training". What counts as training for both? 
And how did you form your measurements of years of training? 15:33:29 From Ashley Swanson : Thanks @Kevin and @Maria. Maybe another paper! 15:34:24 From Matthew Notowidigdo : I like Tal’s suggestion for life-cycle IRR (like authors do here: https://www.econstor.eu/bitstream/10419/101873/1/dp8316.pdf) 15:34:27 From Jonathan Skinner : A high share of "entrepreneurial" physicians in FL :) 15:34:46 From Maria Polyakova : @Colleen, good question. We have played with different measures. Our preferred specification is empirical, based on when we see significant jumps in earnings 15:34:52 From Brad Shapiro : This data and description is really really amazing. I literally don't want to ask you to do anything else, because I don't want to delay your ability to publish this. Just write a bunch of other papers too. 15:34:53 From Maria Polyakova : @Jon, indeed! 15:35:05 From Colleen Carey : Upvote @Brad 15:35:06 From Maria Polyakova : @Brad, thanks! 15:35:12 From Kevin Rinz : @Colleen For physicians, estimation of training duration is described in the appendix. In short, it’s based on when we see changes in earnings that look like starting and finishing residency 15:35:40 From Sarah Miller : +1, very cool! 15:36:00 From Ellie Prager : Kevin I assume you're accounting for fellowship too? 15:36:33 From Maria Polyakova : @Ellie, we don't observe the exact titles of what people are working as, so we are trying to capture discontinuous jumps in earnings. 15:37:01 From Kevin Rinz : @Ellie we don’t have a good way to precisely distinguish residency/fellowship, but yes, we chose rules that require a fairly large earnings jump to signify end of training 15:37:03 From Maria Polyakova : The training/fellowship structure varies a lot across and even within specialties, so would be hard to do this in a not-data-driven way 15:37:06 From Ellie Prager : Thanks Maria. 
That will correctly account for fellowship as pay continues to be based on "years since graduation" all the way to the end of fellowship 15:39:46 From Maria Polyakova : @Tal, good question. we should report IRRs, but I think we can pretty confidently say that the "AMA cartel" story is unlikely to apply universally, but may exist in some specialties. 15:40:23 From Tim J. Layton : another question about the fee-bump analysis: i'm trying to figure out how you all calculated how much payments to PCPs went up due to the fee bump in order to get the % passed through to the docs' incomes 15:40:32 From Maria Polyakova : Since (a) PCPs are pretty close to reasonable outside options (if you consider our outside options reasonable!) and (b) lots of specialty variation is attributable to hours and training length 15:41:28 From Maria Polyakova : @Tim, good question. A simple calculation is just taking total $ that was spent on the fee bump in the states we consider and total increase in doc income that our estimates imply. Then there is a more nuanced computation using state-level spending number, counts of Medicaid patients, etc. 15:42:15 From Douglas Staiger : There has been a secular decline in physician hours in the cps/acs, and would be interesting to see if hours of PCPs responded to the higher fees. 15:42:22 From Tim J. Layton : ah. thanks! didn't know there was data on the total $ spent on the fee bump by state. nice find! 15:43:01 From Maria Polyakova : @Doug, good point, we haven't tried putting hours on the LHS 15:43:57 From Hugh Shiplett : @Tim, the state-level spending data came from the Medicaid Budget and Expenditure System 15:44:06 From Joshua Gottlieb : @Doug the secular trend in hours also reflects the changing age structure you mentioned. 15:44:45 From Tim J. Layton : Thanks Hugh! 15:52:09 From Tim J. Layton : thanks Amy and Heidi!
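[Editor's note: the back-of-the-envelope pass-through calculation described in the Layton/Polyakova exchange above (total increase in physician income implied by the estimates, divided by total fee-bump spending in the sampled states) can be sketched as below. The state names and dollar figures are hypothetical placeholders for illustration only, not values from the paper or from the Medicaid Budget and Expenditure System.]

```python
# Sketch of the simple aggregate pass-through calculation discussed in the chat.
# All dollar amounts below are made-up placeholders, not estimates from the paper.

def pass_through_rate(bump_spending_by_state, income_gain_by_state):
    """Aggregate pass-through: summed implied physician income gains
    divided by summed fee-bump spending across the sampled states."""
    total_spending = sum(bump_spending_by_state.values())
    total_gain = sum(income_gain_by_state.get(state, 0.0)
                     for state in bump_spending_by_state)
    return total_gain / total_spending

# Hypothetical example: $1.0B of bump spending, $0.5B of implied income gains.
spending = {"CA": 600e6, "TX": 400e6}   # fee-bump outlays by state ($)
gains = {"CA": 330e6, "TX": 170e6}      # implied physician income gains ($)
print(f"pass-through: {pass_through_rate(spending, gains):.0%}")
```

The "more nuanced" computation mentioned in the chat would replace the aggregate sums with state-level spending and Medicaid patient counts; this sketch only shows the simple ratio.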