Monetary Policy across Inflation Regimes | Inflation: Drivers and Dynamics Conference 2025

At this point we will move on to our second paper. Katerina Petrova will present "Monetary Policy across Inflation Regimes." Thank you.

So, just to say, I have just moved to Ca' Foscari, and I am very glad and relieved to be back in Europe permanently. This is joint work with Valeria Gargiulo, who is a PhD student at UPF, for whom this is the first chapter of her thesis, and with Christian Matthes. The paper is about inflation regimes. In particular, the empirical question my co-authors ask is whether the effect of monetary policy depends on the underlying level of inflation, and whether other macroeconomic outcomes in the economy do as well.

Most of the existing research uses linear models with constant parameters, but there is emerging evidence from various experimental papers that agents, both firms and consumers, may behave very differently when inflation is high: they form expectations about the future differently, respond to information differently, and pay more attention. That may affect economic outcomes. In the presence of this kind of state dependence, linear models can give you erroneous empirical conclusions, because they simply average across the different regimes.

So the paper has a simple self-exciting Bayesian VAR. Since I am the econometrician on the paper, I am going to focus on the methodological contribution. The model is very simple, but there are two things we do slightly differently from the existing literature. Just to manage your expectations: I do not think you will learn anything new about inflation from this talk, but I would like to think the tool in the paper can be used in a variety of setups to deal with processes like inflation that may have this kind of nonlinear state dependence.

The methodological contributions are twofold. First, the usual approach in the literature either makes only the conditional mean of the model regime dependent or, if it allows for regime dependence in the variance, introduces it in a second stage: you first estimate the threshold using regime dependence in the conditional mean, plug it in, and only then allow for dependence in the variance. The first thing we do differently is to allow for regime dependence in the conditional variance already in the first stage. That helps us identify the threshold parameter, and we argue that macro series do exhibit this kind of regime dependence in the variance, so exploiting it gives a more precise estimator of the threshold.

The second contribution is about estimation. There are two approaches in the literature, a frequentist one and a Bayesian one. The frequentist approach sets up a sum-of-squared-residuals objective function, estimates the threshold, and then does inference on the VAR parameters and the other model parameters. The main problem is that VARs are highly parameterized; with small samples we would like to do some kind of regularization.
The second approach in the literature is fully Bayesian. There, everything is cast in Bayesian terms, which helps with the dimensionality. But the issue is that the fully Bayesian treatment of the threshold parameter requires characterizing its highly non-standard posterior, which requires MCMC with a complicated Metropolis step and makes estimation very tedious.

What we do is combine the frequentist and Bayesian approaches, and we are allowed to do that because both the frequentist and the Bayesian estimates of the threshold parameter are super-consistent: they converge to the truth at a faster rate, because the posterior of the threshold parameter contracts at a faster rate. So, at least in large samples, we are allowed to ignore the parameter uncertainty around the threshold; I will be clearer about this later.

In terms of advantages relative to frequentist estimators, we can handle larger dimensions, because we can use the regularization, or priors, that are widely used in the literature. In terms of advantages relative to the fully Bayesian approach, we have a very fast and efficient Bayesian sampling algorithm. The code is online and some people have already been using it; it takes about a minute to estimate this model and it does not require any MCMC.

I should also mention the advantages relative to Markov-switching VARs and DSGE models, which are widely used in the macro literature. The fact that the regime switch is guided by an observable state variable chosen by the researcher, as in self-exciting models, provides a very simple and transparent way to understand the nonlinearity. To give you an example, in a Markov-switching model the regime is driven by a latent state variable that follows a Markov process, and it is very hard to explain to policymakers why the regime switched and what is driving that latent state variable; here it is completely transparent.

So, as I said, I will spend most of the time on the methodology and the algorithm, maybe show you a Monte Carlo exercise, and in passing highlight the main takeaways from the empirical exercise. The first univariate threshold autoregressive model was introduced by Tong in 1977 and generalized in various directions by Tong and Lim and by Chan in a sequence of papers, including papers in the Annals of Statistics. Here we are looking at a multivariate version of that model.
I don't think I can point... can you see anything? No? Oh yes, you can. Perfect. So y is an n-by-1 vector, and this is a VAR with K regimes. Just ignore the sum for a second: inside the bracket you have your standard VAR of order p, nothing different. The only difference is that the coefficients are now indexed by i, the regime, going from 1 to K, so they are regime dependent. And there is this indicator function that depends on the parameter gamma, the threshold parameter, which is (K minus 1)-dimensional in a K-regime model. The indicator function is just zero or one, depending on where the state variable lies relative to the different thresholds.

What is usually assumed about the state variable? Either it is fixed exogenously, which gives you the threshold model, or it is a lag of the dependent variable. It is always a scalar, and when it is a lag of the dependent variable the model becomes a self-exciting model: lags of the left-hand-side variable switch the regime, which then feeds back into the left-hand side. For the moment I am keeping the variance fixed; I am just telling you what the standard model does.

These self-exciting models have this beautiful nonlinearity through the indicator functions, and they are extremely simple to write down and to estimate. The asymptotics are not trivial, but they are relatively simple, because you get piecewise linear models, which is very nice and also allows very simple estimation. At the same time, this self-exciting mechanism can capture a lot of important nonlinearities in the data, and this has been documented in the older literature, particularly for cyclical data. The paper that derives the limit theory is Chan's Annals of Statistics paper, which derives the asymptotic distribution and shows the super-consistency of the gamma parameter; I will talk a little bit about that.

So how does standard estimation work in this model? First of all, imagine you know gamma. If you know the threshold, you can look at the periods in which you were in each regime, split the sample into subsamples, and do your standard estimation over the subsamples as if nothing happened. As long as you stay in each regime for a fixed fraction of the sample, that is root-n consistent and asymptotically normal, and you can do everything on the subsamples as if there was no nonlinearity.

If gamma is unknown, you have to estimate it, and the way people do this in the frequentist world is to set up a sum-of-squared-residuals objective function. Everything here is stacked, so beta collects all the autoregressive parameters, and gamma-hat is the minimizer of the already-minimized sum of squared residuals with respect to beta. Equivalently, you can get this from a joint minimization: you minimize the sum of squared residuals with respect to beta for a given gamma, which gives you a function of gamma, and then you minimize that with respect to gamma. That is your estimator.
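For reference, here is one way to write down the K-regime threshold VAR and the profiled least-squares estimator just described. The notation is my own shorthand (d is the delay of the state variable, with the conventions gamma_0 = minus infinity and gamma_K = plus infinity), not taken from the slides:

```latex
% K-regime (self-exciting) threshold VAR; for now the innovation variance is
% common across regimes, as in the standard model described above.
\[
y_t \;=\; \sum_{i=1}^{K} \mathbb{1}\{\gamma_{i-1} < s_{t-d} \le \gamma_i\}
\Big( c_i + \sum_{p=1}^{P} A_{i,p}\, y_{t-p} \Big) \;+\; \varepsilon_t ,
\qquad \varepsilon_t \sim \mathcal{N}(0,\Sigma),
\]
% s_{t-d} is exogenous (threshold VAR) or a lagged element of y_t (self-exciting VAR).
% Profiled (concentrated) sum-of-squared-residuals estimator of the threshold vector:
\[
\hat{\gamma} \;=\; \arg\min_{\gamma}\; S_T\!\big(\hat{\beta}(\gamma),\gamma\big),
\qquad
S_T(\beta,\gamma) \;=\; \sum_{t=1}^{T} \varepsilon_t(\beta,\gamma)'\,\varepsilon_t(\beta,\gamma),
\]
% with \hat{\beta}(\gamma) the regime-wise OLS estimate holding the threshold fixed.
```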
The standard identification assumption needed to estimate gamma is that, for any pair of regimes, the vectorized regime-dependent parameter vectors differ: beta_i is not equal to beta_j. At least one element of that vector has to differ across each pair of regimes; otherwise there is no hope of consistently estimating the threshold between them.

The really cool thing, which is what Chan finds, is that this estimator of gamma is super-consistent: it converges at rate n rather than the usual root-n rate. If you want the intuition, the population regression function has a discontinuity at the threshold, and it is that discontinuity that delivers the faster rate of convergence. Because of this faster rate, the frequentist procedure estimates gamma in a first step, plugs gamma-hat in as if it were the true value, and then does inference on beta. That is exactly what we are told in econometrics never to do: when beta_1 and beta_2 converge at the same rate, you are not allowed to plug in beta_1-hat and then do inference on beta_2. But it is allowed when the plugged-in estimator converges at a faster rate: the distortion coming from the uncertainty around gamma is higher order, so it does not affect the limit distribution, only the subsequent terms. That is what the existing literature does.

So what is novel in our procedure? As I said, we deviate in two important ways, but things remain very simple. This is the VAR from the previous slide; the only difference is that we now model, in the DGP, the fact that the variances are regime dependent. We also want to be Bayesian. Whether you are genuinely Bayesian or just want to regularize the problem, we have to impose a distributional assumption on the innovations and write down priors for beta, sigma, and gamma. Because we want things to be fast, we work with conjugate priors, as you will see. We also assume the prior for gamma is independent, and in fact we use the simplest possible prior for gamma: a discrete uniform prior. We just consider grids of points, and (K minus 1)-fold Cartesian products of grids of points for gamma, and then we maximize the log posterior density of the model.

The log posterior density can be written as the log likelihood of the model plus a weighted sum of the log priors, where the weights, the omegas, are essentially the effective sample sizes of each regime. We can write everything in matrices, which makes it very simple to derive closed-form expressions for the posteriors and so on. The W_i are essentially indicator matrices: for each regime i, W_i is a diagonal matrix with zeros and ones on the diagonal marking the observations in that regime. You can then rewrite the likelihood in a neat form involving a Kronecker product, and we look at the maximizer of this log posterior.
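Schematically, the objective just described can be written as follows; I am transcribing it from the verbal description rather than from the slides, so take the exact form of the weights omega_i(gamma) as indicative:

```latex
% Log posterior: log likelihood plus regime-weighted log priors,
% where omega_i(gamma) is the effective sample size of regime i.
\[
\log p(\beta,\Sigma,\gamma \mid Y)
\;=\; \log L(Y \mid \beta,\Sigma,\gamma)
\;+\; \sum_{i=1}^{K} \omega_i(\gamma)\, \log \pi(\beta_i,\Sigma_i)
\;+\; \log \pi(\gamma) \;+\; \text{const.}
\]
% Threshold estimator: maximize the posterior profiled over (beta, Sigma),
% i.e. evaluated at the conditional posterior modes, over a discrete grid Gamma:
\[
\hat{\gamma} \;=\; \arg\max_{\gamma \in \Gamma}\;
\log p\big(\check{\beta}(\gamma), \check{\Sigma}(\gamma), \gamma \mid Y\big).
\]
```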
Similarly to the frequentist case, the estimator for gamma coming from the joint maximization is the same as the one you get by maximizing, with respect to gamma, the posterior that has already been maximized with respect to beta and sigma. You can do some algebra here: the maximized posterior is the posterior evaluated at the posterior modes, beta-check and sigma-check. We can also use the fact that the prior for gamma does not matter anywhere except through the region over which we maximize. In practice, this means we evaluate the log posterior at the posterior modes for many different grid points of the threshold parameter and then maximize numerically with respect to the threshold.

Because the posterior for gamma contracts at the faster rate n, we can show that the posterior of the standardized, centered VAR parameters is asymptotically the same whether you draw from the posterior of gamma, plug in a consistent Bayesian estimate of gamma, or know the true gamma. That is what allows us to avoid the expensive Metropolis step. Everything is in closed form: if you assume a Gaussian likelihood and the standard normal-Wishart prior, we can derive the posterior conditional on gamma. All these posteriors depend on gamma through the W matrices that determine the regimes, and other than that you have the standard result, one for each regime.

In practice, the algorithm does what I told you before. For each point in the (K minus 1)-fold Cartesian product of grids for the threshold, we compute the posterior modes, evaluate the log likelihood and the log prior at those modes, and numerically maximize this profiled posterior with respect to gamma. We store gamma-hat, use it as a fixed plug-in, and then draw from the closed-form posteriors I am giving you here. That is the whole paper.

There is a Monte Carlo exercise where we look at how this works in practice, in particular maximizing a likelihood that has the regime dependence in the variance already in the first stage. It is really simple; Valeria is working in her second chapter on the theory for this and on much more complicated setups, but here we just have a mean-variance model. I also look at small and large samples; as an econometrician I always want to know what happens as the sample size increases, not whether the errors are small or big. We have four DGPs. In DGP1 everything is constant: standard normal random variables, nothing else. In DGP2 there is a break in the mean, so there is regime dependence in the mean, and you have the self-exciting mechanism because the regime is determined by lagged values of the left-hand-side variable. In DGP3 the mean is constant at zero but there is regime dependence in the variances, and in DGP4 there is regime dependence in both the mean and the variance. The true value of gamma is zero. We first look at what the constant-parameter model does; you know what it does.
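To make the idea concrete, here is a minimal univariate sketch of the procedure described above: a two-regime self-exciting threshold AR(1) with regime-dependent variance, a conjugate normal-inverse-gamma prior, and a grid search over the threshold based on the posterior evaluated at the regime-wise posterior modes, with a data-generating process in the spirit of DGP4 (breaks in both the mean and the variance, true threshold zero). The function name, prior settings, grid, and the omission of the log-prior term from the profiled objective are my own simplifications; this is not the authors' code, which is available separately.

```python
# Minimal sketch: two-regime self-exciting threshold AR(1) with regime-dependent
# variance, conjugate normal-inverse-gamma priors, and a grid search over the
# threshold gamma using the likelihood evaluated at the posterior modes
# (the log-prior term is omitted here for brevity).
import numpy as np

def estimate_threshold(y, grid, lam=10.0, a0=2.0, b0=1.0):
    """Return the threshold that maximizes the profiled objective."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])  # constant + first lag
    Y, s = y[1:], y[:-1]                                 # state variable: lagged y
    best_gamma, best_val = None, -np.inf
    for gamma in grid:
        val = 0.0
        for mask in (s <= gamma, s > gamma):             # the two regimes
            Ti = int(mask.sum())
            if Ti < 10:                                  # skip degenerate splits
                val = -np.inf
                break
            Xi, Yi = X[mask], Y[mask]
            # Conjugate updates: beta | sigma^2 ~ N(bn, sigma^2 Vn), sigma^2 ~ IG(an, cn)
            Vn = np.linalg.inv(Xi.T @ Xi + np.eye(2) / lam)
            bn = Vn @ (Xi.T @ Yi)
            resid = Yi - Xi @ bn
            an = a0 + Ti / 2.0
            cn = b0 + 0.5 * (resid @ resid + bn @ bn / lam)
            sig2 = cn / (an + 1.0)                       # posterior mode of sigma^2
            # Gaussian log likelihood evaluated at this regime's posterior modes
            val += -0.5 * Ti * np.log(2 * np.pi * sig2) - 0.5 * (resid @ resid) / sig2
        if val > best_val:
            best_gamma, best_val = gamma, val
    return best_gamma

# Usage: simulate a SETAR(1) process with a break in mean and variance at 0
rng = np.random.default_rng(0)
y = np.zeros(2000)
for t in range(1, len(y)):
    low = y[t - 1] <= 0.0
    y[t] = (0.3 if low else 0.8) * y[t - 1] + rng.normal(scale=1.0 if low else 2.0)
print("estimated threshold:", estimate_threshold(y, np.linspace(-1, 1, 81)))
```

Given the plug-in threshold, one would then redo the conjugate updates on the two implied subsamples and draw the autoregressive parameters and variances directly from the resulting normal-inverse-gamma posteriors, which is what makes the procedure MCMC-free.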
The other estimators we compare are: the regime-dependent sum-of-squared-residuals approach, which is the standard model in the literature and only uses regime dependence in the first moment; the version in Chan and Tsay, which uses regime dependence in the mean to estimate gamma but then also allows regime dependence in the variance in a second stage; and finally our likelihood approach. I am just going to show you some root mean squared errors; I know the tables are a little hard to read, so I will point to the main conclusions.

In DGP1, where there is no regime dependence, the constant model obviously does really well: the errors for the mean and the variance drop quite dramatically as you go to the large sample. The other models, the sum-of-squared-residuals model, the one with second-stage variances, and the likelihood approach, also have errors that drop, but they are a lot bigger, and that is to be expected: you are modelling regime dependence that is not there, so you lose efficiency. Crucially, though, they are still valid; you can split samples even when there is no regime dependence.

In DGP2 there is a break in the mean. Once there is regime dependence in the mean, the constant model fails dramatically and there is no fixing it; the variance is also wrong, because it is based on the estimated mean, which is wrong. The sum-of-squared-residuals model does well, as expected: the errors for the mean parameters drop and the threshold is estimated very precisely, and the same holds for the version augmented with regime-dependent variances. Crucially, our likelihood approach does equally well. So allowing for regime dependence in the variance when there is no such regime dependence does not contaminate our estimation, which is a good thing.

In DGP3 there is regime dependence only in the variances. Here the sum-of-squared-residuals model cannot estimate the threshold at all, because it only uses information from the mean. The mean is still consistently estimated, because any split works when there is no regime dependence in the mean, but the variances are all off. Our model does really well here, as we wanted to see.

Finally, in DGP4, where there is a break in both the mean and the variances, the sum-of-squared-residuals model fails: it cannot estimate the threshold correctly. This was surprising to us originally. Intuitively, we thought that if there is regime dependence in both the mean and the variance, exploiting only the regime dependence in the mean should be enough to recover the threshold. That is true in the threshold model, where the state variable is exogenous, but it is not true in the self-exciting model, and that is why we get this inability to consistently estimate the threshold. Allowing regime dependence in the variance in the second stage does not help, because it is not used to estimate the threshold; the likelihood approach, however, does things as it should.

Okay, let me move quickly, because I have less than five minutes now. There is a section Christian added to the paper.
It is aimed more at the macro audience, to explain why a very unusual regime, which you might think of a little bit as an outlier but which is a non-vanishing fraction of the sample, can completely change the results and lead to wrong conclusions if it appears in your sample, even if it is very infrequent.

In the last three minutes I will briefly touch on the application. We look at monetary policy shocks using the Romer and Romer proxy for the shock, so this part is off the shelf; what is new is that we use our estimation procedure. I know you do not want to see yet another impulse-response plot, so here are the main things. We have a bunch of robustness checks in the paper: we have tried different samples, extending the sample to 2017, high-frequency instruments, and so on. Our choice of state variable is inflation at lag t minus 1, measured year on year. It really does not matter if you use different lags, because inflation is quite persistent, so you get more or less the same regimes.

Crucially, when we estimate the model we find three regimes, and the paper explains why two regimes are not enough: it is really the unusual period of the 1970s that would otherwise contaminate the results. The number that comes out of the estimation is 5.5, a magic number. I should say there is a paper by Canova and Ferrero that does a completely different estimation, also allowing for regimes, and the number they come up with is 5.3, so we are very close to that. So we have a low regime when inflation is below 5.5, a medium regime when it is between 5.5 and 11, and a high regime, which really soaks up the outliers, the highly unusual periods of the 1970s and the early-1980s oil crises. This is what the objective function looks like. There is a lot of material in the paper on the raw data and on where the regimes are estimated, which I will skip. This slide just shows that the model-implied means and variances differ across regimes, which, remember, is very important for the identification of the regimes.

One reduced-form finding concerns the long-run correlations: the infinite-horizon pairwise correlations implied by the model in each regime. Look at inflation and unemployment; I am not claiming this is the slope of a Phillips curve, it is a very reduced-form object, but it tells you what the correlation between the two is in each regime. In the low-inflation regime it is negative but really close to zero; in the medium regime it becomes very negative; and in our high regime it is actually positive, so those are stagflation-type periods. Persistence is also important for some of the results. If you compute the model-implied autocorrelations of the variables across regimes, even at horizon 48, which is four years, the medium regime has really high autocorrelation for inflation, unemployment, and the fed funds rate.
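A sketch of how such within-regime, model-implied long-run moments can be computed, treating the economy as staying in one regime and using that regime's VAR(p) parameters in companion form; the function name and the parameter values below are placeholders, not the paper's estimates.

```python
# Sketch: unconditional (long-run) covariance, correlations, and autocorrelations
# implied by a single regime's stationary VAR(p), via the companion form and the
# discrete Lyapunov equation. Parameter values below are placeholders.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def long_run_moments(A_list, Sigma, max_lag=48):
    """A_list: [A_1, ..., A_p] regime VAR coefficient matrices (each n x n);
    Sigma: regime innovation covariance. Returns the implied correlation matrix
    and the autocorrelation of each variable up to max_lag."""
    n, p = Sigma.shape[0], len(A_list)
    F = np.zeros((n * p, n * p))                 # companion matrix
    F[:n, :] = np.hstack(A_list)
    if p > 1:
        F[n:, :-n] = np.eye(n * (p - 1))
    Q = np.zeros((n * p, n * p))
    Q[:n, :n] = Sigma
    # Stationary covariance solves Gamma0 = F Gamma0 F' + Q
    Gamma0 = solve_discrete_lyapunov(F, Q)
    sd = np.sqrt(np.diag(Gamma0[:n, :n]))
    corr = Gamma0[:n, :n] / np.outer(sd, sd)
    # Autocovariance at lag h is F^h Gamma0; keep the top-left n x n block
    autocorr = np.zeros((max_lag + 1, n))
    Fh = np.eye(n * p)
    for h in range(max_lag + 1):
        Gh = (Fh @ Gamma0)[:n, :n]
        autocorr[h] = np.diag(Gh) / np.diag(Gamma0[:n, :n])
        Fh = Fh @ F
    return corr, autocorr

# Usage with placeholder parameters for a 3-variable VAR(1) regime:
A1 = np.array([[0.7, 0.1, 0.0], [-0.1, 0.7, 0.05], [0.1, -0.2, 0.8]])
Sigma = np.diag([0.5, 0.3, 0.4])
corr, autocorr = long_run_moments([A1], Sigma)
print(np.round(corr, 2))          # implied pairwise long-run correlations
print(np.round(autocorr[48], 2))  # autocorrelation at horizon 48 (four years)
```

Conditional on staying in one regime the model is a linear VAR, so these second moments exist in closed form whenever that regime's companion matrix is stable.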
So the responses in that regime are going to be much longer lived: it is going to take much longer for policy to work through, and also for outcomes to come back to zero. The only picture I am going to leave you with, because I have less than two minutes now, is these impulse responses. The first column is the low-inflation regime, then the medium and the high regimes. Remember this is a nonlinear model, so there are many different ways to compute impulse responses, and the paper has different algorithms for computing different objects. This is a very simple exercise in which we start in a given regime and assume we stay there forever, so we only use that regime's parameters; of course you can do many other things, and those are in the paper. The main message is that in the low regime you have very little to no effect on unemployment, or the labor market, and only a very short-lived effect on inflation. Once you move into the medium regime, once inflation exceeds 5.5 percent, you start getting a much bigger effect on the labor market and a much more delayed and longer-lived effect on inflation. And, like I said, there are other ways to compute impulse responses.

Let me conclude in the last minute. We have this self-exciting Bayesian VAR. The econometric contributions are marginal, but we think they are important for practitioners. The first is that we allow for regime dependence in the variance in the first stage, in order to estimate the threshold better; the second is that we combine frequentist procedures with Bayesian estimation to obtain a very fast Bayesian algorithm for estimating these models. The advantages are that we can handle large dimensions, the model is pretty straightforward to estimate, and it is a very simple and transparent way to model nonlinearity that is easy to explain to policymakers as well. I will leave you with that.

All right. Our discussant is going to be Marco Lombardi from the BIS.

Okay. So, thank you. It is a pleasure to be here; thanks for inviting me and giving me the chance to read this great paper. I invite you all to read it, especially the methodology part, which I found really well written and clear. And I am not just saying that because that is your responsibility as presenter; I really liked that part. But now let me put on my policymaking hat, even if the usual disclaimer applies, and focus my discussion on what is in the paper not so much for an econometrician but for a policymaker who has an interest in a two-regime view of inflation.

Let me start with the first sentence of the introduction, which sets the bar pretty high, because it reads: what are the effects of monetary policy on the economy? To me that is a very ambitious task for a single paper. One reason I think it is so ambitious is that, especially among policymakers, it is not always crystal clear what we mean when we talk about the effects of monetary policy. As an econometrician you have in mind a certain model and shocks, which are deviations from an implicit policy rule.
But we can discuss whether this is what policymakers or central bankers actually want to know. The shocks, as a matter of fact, should not even exist: if you run optimal policy, you do not want to add shocks on top of your policy rule. What you may want to know as a policymaker is whether your policy framework, your policy rule if you want to simplify, is suitable and is the best way to ensure and deliver price stability.

That said, what the paper does in practice, in its empirical exercise, is focus on the differences in the transmission of monetary policy shocks across the three inflation regimes identified with the threshold methodology. And the key finding, in terms of what monetary policy does to the economy and to inflation in particular, is that inflation reacts strongly to monetary policy shocks only in this medium-inflation regime. So let me give you more detail on why I think this finding is interesting and on what the paper offers in terms of interpreting it.

Let me start from the econometrics, again from a policymaking perspective. What I really liked about the threshold approach is this: I had the impression that threshold models were somewhat out of fashion, and I found it very important, and appreciated, that you make the point that these models are worthwhile because they are parsimonious, computationally light, and, especially, transparent. I think this is a key advantage over models with state variables that need to be filtered out, or models such as fully time-varying-parameter VARs. You do full Bayesian inference, which is appreciated: you get draws of all the parameters, which will be important for what I am going to say in a moment, including the covariances across regimes.

What is a bit more disappointing, not so much about the econometrics but about the way the model is applied to the data in the empirical exercise, is that the sample is constrained. I understand you want to cover the great inflation of the 1970s, but by using the Romer and Romer shocks you are constrained to stop the estimation in 2007. This has two big drawbacks in my view. One is the risk that you read the results in terms of the great inflation versus the great moderation, which I do not think is the case; there is more to what you do. The second is that you cannot say much about the recent wave of inflation. I saw that in the appendix you have an alternative with shocks up to 2017, but I am wondering whether you could use something that covers the recent wave of inflation and then say something about which regime we ended up in and how we got out of that negative regime, hopefully back into a low-inflation regime.

Okay. That said, let me go to my comments on the results and on what they say about monetary policy if I take them at face value. Here I copied your IRFs of the monetary policy shock on inflation across the three regimes.
What I get out of this, again taking the IRFs at face value, is the following. If I am in a low-inflation regime, there is some reaction of inflation to a policy shock, but it is rather small, which suggests that trying to fine-tune inflation when inflation is already low is difficult and potentially very costly. If we move to the medium-inflation regime, that is where monetary policy has good traction: in that regime it looks like you can effectively steer inflation with your policy actions. But once you enter the so-called high-inflation regime, then, again taking the IRF at face value, it looks like you cannot get out of it through monetary policy; the only thing you can do is cross your fingers and hope for other favorable shocks that get you out of that regime.

Now, the good thing about your approach is that having the full posterior distribution of all the parameters can really help shed light on why this is happening and on what you can do about it. Let me start with something you showed already: the reduced-form Phillips curve, or Phillips correlation if you want, across regimes. To me, this is what explains why things do not work as you would like in the high-inflation regime. In the high-inflation regime, the bars on the right, the Phillips curve is, if anything, positively sloped. That explains why a monetary policy shock in the strict sense will not do much to inflation.

That said, we know that in the 1970s the US was arguably in a high-inflation regime and got out of it through monetary policy, not by crossing their fingers and hoping for favorable shocks. But that happened not through monetary policy shocks within the same policy rule, but rather by changing the policy framework and committing to something different that was not consistent with the framework adopted through the 1970s. So what might the implicit policy rules in these three regimes look like? Again, having the full posterior distributions, you can dig into that and get a bunch of useful findings. By implicit policy rules I mean how monetary policy systematically reacts to past developments in unemployment and inflation, and whether that is consistent with price stability. Of course, if a policy rule is not consistent with price stability, it stands to reason that policy shocks within that rule will not do much.

As I was saying, you can get some insight into these policy rules by first looking at the contemporaneous correlations, which I found in the appendix of the paper. From there you can see that in the medium- and high-inflation regimes there is a reaction of policy rates to inflation as well as to unemployment, but in the high-inflation regime the coefficient on unemployment is much lower, whereas in the low-inflation regime the coefficient on inflation is much higher. Again, this is just a contemporaneous correlation; it is not the implicit policy rule.
But by looking at the full posterior distribution of all the coefficients, you can get a proper representation of these implicit policy rules, and I think it would be very interesting to see, say, a spin-off paper in which you leverage these empirical results more.

So let me bring this to the end. As I said, kudos for the methodology; I really liked it, it is explained very clearly, it is really excellent, and it nicely illustrates the power of having a full Bayesian approach to this sort of problem. And, as I said, it opens the way for additional follow-up work, including doing more with the posterior distributions you have in order to characterize different features of the economy across regimes. Yet another extension would be expanding the VAR, for example to include wages and oil or import prices; that would help shed some light on the persistence and transmission of supply shocks across the different regimes. More specifically, the policy question I have in mind is whether, depending on the regime, it is safe or not to look through the supply shocks you may face. So that is it on my side. Thanks very much again.

Yes, thanks, Marco. Questions from the floor?

Okay. First of all, thank you, because I was working on similar models and I was calibrating rather than estimating; now I know how to estimate them very fast, so thanks a lot. I have two questions, one clarifying. Why the normal-Wishart and not a more general setup with a more general prior? And a question about the endogeneity of current shocks: the regime is predetermined, right, but in the example you are considering you might have, for instance, a current monetary policy shock that shifts inflation above or below the threshold. Did you think about that? Thank you.

Any other questions? Maybe one from my perspective. There is certainly evidence that the economy behaves differently at high inflation versus low inflation, but there are other ways to think about multiple regimes or nonlinearities. So you could certainly go through your variables and compare how your specification fares against other thresholds you might put in there as well.

All right. Thank you so much. Let me first thank you for the great discussion. You asked me a couple of things. First, about the sample: you are right, we wanted to go back to the 1970s, and the high-frequency shock measures do not allow us to do that. We have done robustness checks with the high-frequency instruments going all the way to the end of the sample, and there you do not get the high-inflation regime, because there is no data exceeding 11 percent inflation; other than that, the results are very similar. But to speculate a little: we think that not having the most recent episode in the sample, and still being able to say something about it that is consistent with what happened, certainly in the labor market, could be a good thing, because ex post everything looks predictable. If we had that episode in our sample, we would simply be fitting the parameters to that particular episode. So not having it, at least, is one way to speculate about this. On the IRFs for inflation...
I totally agree with you, and your suggestion to look more deeply into the policy rules using the Bayesian posterior is really useful. Unfortunately, the paper has just been accepted, so I do not think we can incorporate it in this paper, but Valeria continues working on this, and I hope to pass that suggestion on to her. Then, your comment on a larger set of variables: again, it is not in this paper, but the fact that the methodology can handle much larger models, shrink, and do fast inference means it may be useful for looking at wages and supply and demand shocks.

Then I had the questions from the floor. On the normal-Wishart prior, the answer is convenience: when you assume Gaussianity and use the normal-Wishart prior, you do not have to do MCMC; you have closed-form conjugate conditional posteriors given gamma, so you draw using simple Monte Carlo draws. Of course you can do anything else, but then things get slower, not very slow, but you would have to do a proper MCMC in that setup. No other reason than that. Also, from a more asymptotic perspective, the prior does not matter; it is just a device to penalize. And a Minnesota-type prior is something we know how to use really well, because instead of having to specify a massive shrinkage variance matrix we just have to pick one shrinkage parameter, which makes everything really simple.

On the endogeneity issue: I understand that from a structural point of view it is very interesting to ask what happens when the current value of a variable shifts the regime. But from an econometric point of view, you cannot estimate that model. You could if your state variable were not one of the dependent variables: if the state variable were something exogenous that came from outside the model, and you wrote all the assumptions of the model conditional on that exogenous state variable at time t, then you would be in the game. But in the self-exciting model you cannot estimate it; all the assumptions would be violated. If you added the time-t state variable, you would have the left-hand-side variable at time t appearing on the right-hand side of the equation. You can try to rewrite it, but I do not think estimation will work in that framework.

And thank you for your question as well. In terms of the methodology, it is general: you can add other state variables. We have played with this; I think we tried having the interest rate as a state variable, above and below zero, or above and below something effectively close to zero. You can do that. You can also use a more complicated state variable, for example an interaction between state variables: low inflation and low unemployment, low inflation and high unemployment, and so on. You can do all of that. In the end, we did some robustness checks, but given all the evidence we have on how agents behave, like the recent RCT evidence, we thought the relevant state variable is inflation, also because that is what the policymaker is focused on. Good. Thanks. Anything else? Any other questions?
Yes, please. Just a quick question, maybe also to connect to the previous paper. You validate the model through Monte Carlo, which is more than fine, but did you try to look at the forecasting performance of the model, perhaps compared to other linear or nonlinear models? In particular, your conditional forecasts would be nonlinear, and you could measure the balance of risks, because the predictive density would be a mixture of three normals, I guess, depending on the probability of ending up in one of the three regimes tomorrow. Thank you very much.

So, first, on the Monte Carlo exercise: the reason we have it is as a sanity check, because, as I said, the full consistency proof for the regime dependence in the variance is something Valeria needs to work on in her second chapter, so it was not really part of the paper; we just wanted to make sure everything works exactly as expected. In terms of conditional forecasts, you are right that this model can generate really interesting ones. I actually had a lot on that, and in the end it is not in the paper; we got rid of it. But you are right that this model is interesting for forecasting, and also for computing the different impulse responses, because, first of all, the initial condition matters: the state of the economy today determines which regime you are in and therefore which parameters to use. Then, once you start computing forecasts going forward, the shocks you draw, of different sizes and signs, will push you into different regimes, and once you are in a different regime you change the parameters. So you get really interesting conditional forecasts, which are not the same at any two points in time; the model will imply very different conditional forecasts. It is definitely something we looked at, but in the end the paper is finished and it is only about policy. I agree that, from my perspective, this is a lot more interesting. You do see a little bit of these conditional forecasts in the nonlinear impulse responses we compute: in those, not the ones I showed, you take different starting dates, move forward with the model with and without the shock, using the full nonlinearity of the model, and in the end compute the impulse response as the difference between the path with the shock and the path without the shock. Thank you.

Right, it is 17:00 and we are right on time. So let's give our last panel a round of applause, please.
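As an aside, here is a minimal sketch of the nonlinear, starting-point-dependent impulse responses described in that last answer: simulate the model forward from a given initial condition with and without an extra shock on impact, average over draws of future shocks, and take the difference. The univariate two-regime setup, the unit shock, and all parameter values are illustrative assumptions, not the paper's specification.

```python
# Sketch: generalized impulse responses for a two-regime self-exciting threshold
# AR(1), as the difference between simulated paths with and without an extra
# shock on impact, averaged over future shock draws. Parameters are placeholders.
import numpy as np

def simulate(y0, shocks, delta, gamma=0.0, phi=(0.3, 0.8), sig=(1.0, 2.0)):
    """Simulate the SETAR(1) path; delta is added to the first-period innovation."""
    y = np.empty(len(shocks) + 1)
    y[0] = y0
    for t in range(1, len(y)):
        low = y[t - 1] <= gamma                       # regime from lagged y
        eps = shocks[t - 1] * sig[0 if low else 1]
        y[t] = phi[0 if low else 1] * y[t - 1] + eps + (delta if t == 1 else 0.0)
    return y[1:]

def girf(y0, delta=1.0, horizon=24, reps=5000, seed=0):
    rng = np.random.default_rng(seed)
    diff = np.zeros(horizon)
    for _ in range(reps):
        shocks = rng.standard_normal(horizon)
        diff += simulate(y0, shocks, delta) - simulate(y0, shocks, 0.0)
    return diff / reps

# The response depends on the starting point, because the regime can switch along
# the simulated path: compare a "low" and a "high" initial state.
print(np.round(girf(y0=-1.0)[:6], 2))
print(np.round(girf(y0=2.0)[:6], 2))
```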

Session 5 on Inflation and Monetary Policy at the Inflation: Drivers and Dynamics Conference 2025. Session chair: Edward Knotek, Federal Reserve Bank of Cleveland.

Monetary Policy across Inflation Regimes
• Presenter: Katerina Petrova, Federal Reserve Bank of New York, Universitat Pompeu Fabra, and Barcelona School of Economics
• Valeria Gargiulo, Universitat Pompeu Fabra
• Christian Matthes, Indiana University
• Discussant: Marco Lombardi, Bank for International Settlements

Watch all sessions from the conference: https://www.youtube.com/playlist?list=PLnVAEZuF9FZnje5bTTZ9sJKe4TR0OlIbA

See the conference programme here:
https://www.ecb.europa.eu/press/conferences/html/20250929_inflation_conference.en.html