Luciano I de Castro

ALL PUBLICATIONS

Payoff quantiles have been used for decision making in banking and investment (in the form of Value-at-Risk) and in the mining, oil and gas industries (in the form of “probabilities of exceeding” a certain level of production). However, it is unknown how common quantile-based decision making actually is among typical individual decision makers. This paper describes an experiment that aims to (1) compare how common quantile decision making is relative to expected utility maximization, and (2) estimate risk attitude parameters under the assumption of quantile preferences. The experiment has two parts. In the first part, individuals make pairwise choices between risky lotteries, and the competing models are fitted to the choice data. In the second part, we directly elicit a decision rule from a menu of alternatives. The results show that a quantile preference model outperforms expected utility for a considerable minority, 30%–50%, of participants, depending on the metric. The majority of individuals are risk averse, and women are more risk averse than men, under both models.

This paper develops a model of optimal portfolio allocation for an investor with quantile preferences, i.e., one who maximizes the τ-quantile of the portfolio return, for τ ∈ (0,1). Quantile preferences make it possible to study heterogeneity in individuals’ portfolio choice by varying the quantile, and they have a solid axiomatic foundation. Their associated risk attitude is captured entirely by a single-dimensional parameter (the quantile τ), instead of a utility function. We formally establish the properties of the quantile model. The presence of a risk-free asset in the portfolio produces an all-or-nothing optimal response to the risk-free asset that depends on the investor’s quantile preference. In addition, when both assets are risky, we derive conditions under which the optimal portfolio decision has an interior solution that guarantees diversification vis-à-vis fully investing in a single risky asset. We also derive conditions under which the optimal portfolio decision is characterized by two regions: full diversification for quantiles below the median and no diversification for upper quantiles. These results are illustrated in an exhaustive simulation study and an empirical application using a tactical portfolio of stocks, bonds and a risk-free asset. The results show heterogeneity in portfolio diversification across risk attitudes.
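As a numerical illustration of the criterion (a minimal sketch in which the bivariate normal return distribution, its parameter values, and the weight grid are all hypothetical, not the paper's calibration), one can search a grid of weights for the allocation that maximizes the simulated τ-quantile of the portfolio return:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint returns for two risky assets (illustrative only):
# asset 1 has higher mean and higher variance than asset 2.
returns = rng.multivariate_normal(
    mean=[0.06, 0.03],
    cov=[[0.040, 0.005], [0.005, 0.010]],
    size=50_000,
)

def optimal_weight(tau, grid=np.linspace(0.0, 1.0, 101)):
    """Weight on asset 1 that maximizes the tau-quantile of portfolio return."""
    # Each column is the simulated portfolio return for one candidate weight.
    port = np.outer(returns[:, 0], grid) + np.outer(returns[:, 1], 1.0 - grid)
    q = np.quantile(port, tau, axis=0)
    return grid[np.argmax(q)]

w_low, w_high = optimal_weight(0.1), optimal_weight(0.9)
```

In this simulated example, low quantiles select an interior (diversified) weight while high quantiles concentrate in the riskier asset, echoing the two-region pattern described above.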

This book contains articles and short notes presented at the Interdisciplinary Symposium on the Brazilian Political System, held at IMPA in July 2021.

This paper provides an up-to-date comparison of Brazil’s political system with that of 33 other democracies. We show that Brazil is an outlier with respect to the number of effective parties, the total government budget allocated to the legislative power, and the public funds allocated to parties (to fund campaigns and regular party operations). Brazil is also unique in its electoral management body: it is the only country in our sample in which the judiciary both organizes and oversees the electoral process. We also find a positive correlation between total public funding and the total number of effective parties.

Minimum income guarantee programs have been considered or proposed for decades (or even centuries), but support for such programs appears to have increased recently. Even so, resistance remains and implementation poses difficulties. One of the difficulties concerns the definition of the benefit’s value. It is generally argued that the value should be sufficient to “live reasonably and freely”, but this does not lead to an objective figure. This article proposes a methodology to define the value of the minimum income, based on what we call intergenerational equity. This principle requires that current fiscal policy rules apply to all future generations. Following it, we arrive at an estimate of “social equity”, which can be converted into a dividend for each citizen. As an added benefit, the idea also provides a way to assess the long-term effects of fiscal policies.

The rules that define the length of stay in power are central aspects of a democratic regime. These include term limits, whether reelection is allowed, and the prerogative to dismiss representatives before the end of the term, which in the United States is called recall. This article details a recall format that seems well suited to Brazil: setting the term limit of mayors, governors and the president at 8 years, but holding a recall every 2 years. Thus, the electorate will be able to limit the damage of an incompetent or corrupt ruler, while benefiting from an effective government with a longer term. We discuss the advantages and disadvantages of the proposal, compared to a longer single term and the current reelection rule.

This paper axiomatizes static and dynamic quantile preferences. Static quantile preferences specify that a prospect should be preferred if it has a higher τ-quantile, for some τ ∈ (0, 1), while their dynamic counterpart extends this to take into account a sequence of decisions and information disclosure. An important motivation for the axiomatization that leads to this preference is the separation of tastes and beliefs. We first axiomatize quantile preferences for the static case with finite state space and then extend the axioms to the dynamic context. The dynamic preferences induce an additively separable quantile model with standard discounting, that is, the recursive equation is characterized by the sum of the current period utility function and the discounted value of the certainty equivalent, which is a quantile function. These preferences are time consistent and have a simple quantile recursive representation, which gives the model the analytical tractability needed in several fields in financial and economic applications. Finally, we study the notion of risk attitude in both the static and recursive quantile models. In quantile models, the risk attitude is completely captured by the quantile τ, a single-dimensional parameter. This is simpler than in expected utility models, where in general the risk attitude is determined by a function.
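The recursive equation described above can be sketched schematically (notation assumed for illustration: u the period utility, c_t consumption, β the discount factor, and Q_τ[· | F_t] the τ-quantile conditional on time-t information):

$$ V_t = u(c_t) + \beta \, Q_\tau\!\left[\, V_{t+1} \mid \mathcal{F}_t \,\right], \qquad \tau \in (0,1). $$

The certainty equivalent here is the conditional quantile itself, which is what distinguishes the model from recursive expected utility.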

This paper derives several novel properties of conditional quantiles viewed as nonlinear operators. The results are organized in parallel to the usual properties of the expectation operator. We first define a τ-conditional quantile random set, relative to any sigma-algebra, as a set of solutions of an optimization problem. Then, well-known properties of unconditional quantiles, such as translation invariance, comonotonicity, and equivariance to monotone transformations, are generalized to the conditional case. Moreover, a simple proof of Jensen’s inequality for conditional quantiles is provided. We also investigate continuity of conditional quantiles as operators with respect to different topologies and obtain a novel Fatou’s lemma for quantiles. Conditions for continuity in Lp and weak continuity are also derived. Then, the differentiability properties of quantiles are addressed. We demonstrate the validity of Leibniz’s rule for conditional quantiles for monotone as well as separable functions. Finally, although the law of iterated quantiles does not hold in general, we characterize the maximal set of random variables for which this law holds, and investigate its consequences for the infinite composition of conditional quantiles.
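For concreteness, a standard way to express the optimization problem mentioned above (a sketch; the paper's exact definition may differ in its details) characterizes the τ-conditional quantile relative to a sigma-algebra G as a G-measurable minimizer of the expected check loss:

$$ Q_\tau[X \mid \mathcal{G}] \;\in\; \operatorname*{arg\,min}_{Z \ \mathcal{G}\text{-measurable}} \ \mathbb{E}\big[\rho_\tau(X - Z)\big], \qquad \rho_\tau(u) = u\,\big(\tau - \mathbf{1}\{u < 0\}\big). $$

Because the minimizer need not be unique, the solution is naturally a set, matching the "quantile random set" terminology above.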

This paper conducts a laboratory experiment to assess the optimal portfolio allocation under quantile preferences (QP) and compares the model predictions with those of a mean-variance (MV) utility function. We estimate the risk aversion coefficients associated with the individuals’ empirical portfolio choices under the QP and MV theories, and evaluate the relative predictive performance of each theory. The experiment assesses individuals’ preferences through a portfolio choice experiment constructed from two assets that may include a risk-free asset. The results of the experiment confirm the suitability of both theories to predict individuals’ optimal choices. Furthermore, the aggregation of results by individual choices offers support to the MV theory. However, the aggregation of results by task, which is more informative, provides more support to the QP theory. The overall message that emerges from this experiment is that individuals’ behavior is better predicted by the MV model when it is difficult to assess the differences in the lotteries’ payoff distributions, and better described by QP maximization otherwise.

This note proposes a non-linear GMM quantile regression model to estimate the quantile as an additional parameter. The limiting distribution is studied. An empirical application to an intertemporal consumption model built on a structural dynamic quantile utility model illustrates the estimator. Using US data, it separately estimates the elasticity of intertemporal substitution and the risk attitude, which is captured by the estimated quantile.
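The building block behind this estimator is ordinary quantile regression for a fixed τ, which minimizes the mean check loss; the note's contribution is a nonlinear GMM version that treats τ itself as a parameter. The sketch below, with simulated data and hypothetical coefficients, shows only the fixed-τ case:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated data (hypothetical): linear model with intercept 1.0 and slope 2.0.
n = 2000
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, n)

def check_loss(u, tau):
    """Koenker-Bassett check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0.0))

def fit_quantile_regression(tau):
    """Estimate (intercept, slope) of the tau-conditional-quantile line."""
    obj = lambda b: np.mean(check_loss(y - b[0] - b[1] * x, tau))
    res = minimize(obj, x0=[0.0, 0.0], method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
    return res.x

beta_median = fit_quantile_regression(0.5)  # approximately (1.0, 2.0)
```

In the note's GMM formulation, τ enters the moment conditions as an additional parameter to be estimated jointly with the structural coefficients.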

Rational expectations equilibrium seeks a proper treatment of behavior under private information by assuming that the information revealed by prices is taken into account by consumers in their decisions. Typically, agents are supposed to maximize a conditional expectation of a state-dependent utility function and to consume the same bundles in indistinguishable states [see Allen (Econometrica 49(5):1173–1199, 1981), Radner (Econometrica 47(3):655–678, 1979)]. A problem with this model is that a rational expectations equilibrium may not exist even under very restrictive assumptions, may not be efficient, may not be incentive compatible, and may not be implementable as a perfect Bayesian equilibrium (Glycopantis et al. in Econ Theory 26(4):765–791, 2005). We introduce a notion of rational expectations equilibrium with two main features: agents may consume different bundles in indistinguishable states, and ambiguity is allowed in individuals’ preferences. We show that such an equilibrium exists universally, and not only generically, without fixing a particular preference representation. Moreover, if we particularize the preferences to a specific form of the maxmin expected utility model introduced in Gilboa and Schmeidler (J Math Econ 18(2):141–153, 1989), then we are able to prove efficiency and incentive compatibility. These properties do not hold for the traditional (Bayesian) rational expectations equilibrium.

This paper develops a dynamic model of rational behavior under uncertainty, in which the agent maximizes the stream of future τ-quantile utilities, for τ ∈ (0,1). That is, the agent has a quantile utility preference instead of the standard expected utility. Quantile preferences have useful advantages, including the ability to capture heterogeneity and to separate risk aversion from the elasticity of intertemporal substitution. Although quantiles do not share some of the helpful properties of expectations, such as linearity and the law of iterated expectations, we are able to establish all the standard results in dynamic models. Namely, we show that quantile preferences are dynamically consistent; the corresponding dynamic problem yields a value function via a fixed-point argument; this value function is concave and differentiable; and the principle of optimality holds. Additionally, we derive the corresponding Euler equation, which is well suited for using well-known quantile regression methods for estimating and testing the economic model. In this way, the parameters of the model can be interpreted as structural objects. Therefore, the proposed methods provide microeconomic foundations for quantile regression methods. To illustrate the developments, we construct an intertemporal consumption model and estimate the discount factor and elasticity of intertemporal substitution parameters across the quantiles. The results provide evidence of heterogeneity in these parameters.

We consider estimation of finite-dimensional parameters identified by general conditional quantile restrictions, including instrumental variables quantile regression. Within a generalized method of moments framework, moment functions are smoothed to aid both computation and precision. Consistency and asymptotic normality are established under weaker assumptions than previously seen in the literature, allowing dependent data and nonlinear structural models. Simulations illustrate the finite-sample properties. An in-depth empirical application estimates the consumption Euler equation derived from quantile utility maximization. Advantages of quantile Euler equations include robustness to fat tails, decoupling risk attitude from the elasticity of intertemporal substitution, and error-free log-linearization.
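The smoothing step can be sketched as follows (a schematic, not the paper's exact notation: g(x; β) is the structural quantile function, z an instrument vector, G a smooth CDF-type kernel, and h a bandwidth). The indicator in the conditional quantile moment condition is replaced by a smooth surrogate, making the sample moments differentiable in β:

$$ \mathbb{E}\big[\big(\mathbf{1}\{y \le g(x;\beta)\} - \tau\big)\, z\big] = 0 \quad\longrightarrow\quad \mathbb{E}\Big[\Big(G\big(\tfrac{g(x;\beta) - y}{h}\big) - \tau\Big)\, z\Big] = 0, $$

with h shrinking to zero as the sample size grows, so the smoothed moments approximate the original ones.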

This note argues that we should pursue the promotion of renewables through voluntary action, not through decisions taken and enforced by governments.

A fundamental result of modern economics is the conflict between efficiency and incentive compatibility, that is, the fact that some Pareto optimal (efficient) allocations are not incentive compatible. This conflict has generated a huge literature, which almost always assumes that individuals are expected utility maximizers. What happens if they have other kinds of preferences? Is there any preference for which this conflict does not exist? Can we characterize those preferences? We show that in an economy where individuals have complete, transitive, continuous and monotonic preferences, every efficient allocation is incentive compatible if and only if individuals have maximin preferences.

We introduce a dispatch model of Colombia’s independent system operator in order to study the relative merits of self-commitment vs. centralized unit commitment. We capitalize on the transition that took place in Colombia in 2009 from self-unit commitment to centralized unit commitment and use data for the period 2006–2012. In our analysis we simulate a competitive benchmark based on estimated marginal costs, startup costs and opportunity costs of thermal and hydro plants. We compare the differences between the self-commitment for the period 2006–2009 and the competitive benchmark to the differences between the bid-based centralized unit commitment and the competitive benchmark after the transition. Based on these comparisons we estimate changes in deadweight losses due to misrepresentation of costs by bidders and dispatch inefficiency. The results suggest that centralized unit commitment has improved economic efficiency, reducing the relative deadweight loss by at least 3.32%. This result could be explained in part by the observation that, before 2009, thermal energy was underproduced relative to the competitive benchmark, and it supports the claim that dispatch efficiency improved after the transition.

This chapter discusses economic aspects of the Smart Grid set of innovations.

This paper proposes a new general class of strategic games and develops an associated new existence result for pure-strategy Nash equilibrium. For a two-player game with scalar and compact action sets, existence entails that one reaction curve be increasing and continuous and the other quasi-increasing (i.e., not have any downward jumps). The latter property amounts to strategic quasi-complementarities. The paper provides a number of ancillary results of independent interest, including sufficient conditions for a quasi-increasing argmax (or non-monotone comparative statics), and new sufficient conditions for uniqueness of fixed points. For maximal accessibility of the results, the main results are presented in a Euclidean setting. We argue that all these results have broad and elementary applicability by providing simple illustrations with commonly used models in economic dynamics and industrial organization.

We introduce the idea of implementation under ambiguity. In particular, we study maximin efficient notions for an ambiguous asymmetric information economy (i.e., economies where agents’ preferences are maximin à la Wald, 1950). The interest in maximin preferences lies in the fact that maximin efficient allocations are always incentive compatible (de Castro and Yannelis, 2009), a result which is false with Bayesian preferences. We introduce a noncooperative notion, called maximin equilibrium, which provides a noncooperative foundation for individually rational and maximin efficient notions. Specifically, we show that given any arbitrary individually rational and ex-ante maximin efficient allocation, there is a direct revelation mechanism that yields the efficient allocation as its unique maximin equilibrium outcome. Thus, an incentive compatible, individually rational and efficient outcome can be reached by means of noncooperative behavior under ambiguity.

In a partition model, we show that each maximin individually rational and ex ante maximin efficient allocation of a single-good economy is implementable as a maximin equilibrium. When there is more than one good, we introduce three conditions. If none of the three conditions is satisfied, then a maximin individually rational and ex ante maximin efficient allocation may not be implementable. However, as long as one of the three conditions is satisfied, each maximin individually rational and ex ante maximin efficient allocation is implementable. Our work generalizes and extends the recent paper of de Castro et al. (Games Econ Behav 2015. doi:10.1016/j.geb.2015.10.010).

This paper attempts to shed light on the relative merits of centralized electricity markets with multipart bids and dispatch using an MIP-based unit commitment optimization approach vs. self-committed markets with linear energy supply curves. We conduct an empirical study of data from the Colombian market, which in 2009 transitioned from a self-commitment paradigm to a centralized unit commitment approach where generators offer a linear supply function for energy along with start-up costs while the commitment and dispatch are determined by the system operator using MIP-based optimization. The results indicate that the transition to centralized dispatch has resulted in productive efficiency gains through a decrease in production costs. However, these gains have not translated into wholesale price decreases; in fact, wholesale prices increased after the change in the dispatch approach. These results suggest that productive efficiency gains have been captured by suppliers through the exercise of market power.

Agent-based simulations may be a way to model human behavior in decisions under risk. However, it is well known in economics that Expected Utility Theory (EUT) is flawed as a descriptive model. In fact, there are models based on prospect theory (PT) that try to provide a better description. If people behave according to PT in financial environments, it is arguable that PT-based agents may be a better choice for such environments. We investigate this idea in a specific risky environment, a financial market. We propose an architecture for PT-based agents. Due to some limitations of the original PT, we use an extension of PT called Smooth Prospect Theory (SPT). We simulate artificial markets with PT and traditional (TRA) agents using historical data of many different assets over a period of 20 years. The results showed that SPT-based agents provided behavior that is closer to real market data than TRA agents, and that the improvement when using SPT rather than TRA agents is statistically significant. This supports the idea that PT-based agents may be a better choice to model the behavior of agents in risky environments.

We study the impact of product definition in electricity auctions. While the auction rules (pay-as-bid or uniform price) play a key role, the definition of the product itself also emerges as a critical step. Poorly designed products may harm both market performance and the physical operation of the system. We investigate the impacts that the product definition can have on market outcomes. A product definition implemented in some electricity markets is used to unveil critical aspects that must be considered when electricity products are defined. Our results provide guidelines for improving product definition in electricity auctions.

This paper reconsiders the well-known comparison of equilibrium entry levels into a Cournot industry under free entry, second best (control of entry but not production) and first best (control of entry and production). Allowing for the possibility of limited increasing returns to scale in production, this paper generalizes the conclusion of Mankiw and Whinston (1986) [10], that under business-stealing competition, free entry yields more firms than the second-best solution. We also show that under-entry always holds under business-enhancing competition. This confirms the general intuition given by Mankiw and Whinston, which does not rely on the convexity of the cost function. The same result is shown to extend (at a similar level of generality) to the comparison between free entry and the first best socially optimal solution, irrespective of business-stealing. Three illustrative examples are provided, one showing that the second-best and free entry solutions may actually coincide.

A preference is invariant with respect to a set of transformations if the ranking of acts is unaffected by reshuffling the states under these transformations. For example, transformations may correspond to the set of finite permutations, or the shift in a dynamic choice model. Our main result is that any invariant preference must be parametric: there is a unique sufficient set of parameters such that the preference ranks acts according to their expected utility given the parameters. Parameters are characterized in terms of objective frequencies, and can thus be interpreted as objective probabilities. By contrast, uncertainty about parameters is subjective. The preferences for which the above results hold are only required to be reflexive, transitive, monotone, continuous, and mixture linear.

This note relates ambiguity aversion and private information by offering an interpretation of Ellsberg’s paradox in terms of incompleteness of preferences. We adopt the standard model of information as a σ-algebra Σ of events. These are the events that the decision maker is informed about, and whose likelihood she is therefore able to judge by attaching a probability value to them. Note that the decision maker is unable to compare acts that are not measurable with respect to Σ, because those cannot be integrated using the standard expected utility framework. Her preferences are, therefore, incomplete. Facing a decision problem that requires comparing non-measurable acts, the decision maker is confronted with the problem of completing her preferences. Some natural ways of completing the preferences lead to the behavior described in Ellsberg’s thought experiment.

Smart grid technologies may bring substantial advantages to society, but the required investments are sizable. This paper analyzes three main issues related to smart grids: reliability, demand response and cost recovery of investments. In particular, we show that generators will lose profits as a direct effect of demand response initiatives, and most of the benefits of smart grids cannot be easily converted into payments. Moreover, there are potential issues in the choices made by utilities for providing smart grids, and the reliability pertinent to smart grids is a kind of public good.

Load forecasting is a central task for operating, maintaining, and planning power systems. Because of this importance, many different methods have been proposed to forecast load, but none has proved clearly superior. This paper proposes a prediction market to forecast electricity demand, which has the advantage of allowing aggregation of, and competition among, the many available methods. We describe how to implement a simple prediction market for continuous variables, using only contracts based on binary variables. We also discuss possible pitfalls in the implementation of such a market.
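The binary-contract construction can be sketched as follows (all thresholds and prices below are hypothetical, purely for illustration): each contract pays 1 if realized demand does not exceed its threshold, so under risk neutrality its price estimates the demand CDF at that threshold, and a point forecast can be read off the implied distribution:

```python
import numpy as np

# Hypothetical binary contracts: each pays 1 if realized demand (MW) is at
# or below its threshold. Prices are illustrative market quotes and, under
# risk neutrality, estimate the demand CDF at each threshold.
thresholds = np.array([800.0, 900.0, 1000.0, 1100.0, 1200.0])
prices = np.array([0.05, 0.25, 0.60, 0.85, 0.97])

def implied_mean(thresholds, cdf):
    """Approximate the expected demand implied by the discretized CDF."""
    # Probability mass in each interval between consecutive thresholds;
    # tail mass is assigned to the end thresholds as a crude approximation.
    probs = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    points = np.concatenate((
        [thresholds[0]],
        (thresholds[:-1] + thresholds[1:]) / 2.0,
        [thresholds[-1]],
    ))
    return float(probs @ points)

forecast = implied_mean(thresholds, prices)  # a single point forecast in MW
```

Adding more thresholds refines the discretization, and the same prices also yield quantile forecasts directly by inverting the implied CDF.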

Establishing existence and characterizing equilibria are both important achievements in the study of auctions. However, we recognize that existence results form the basis for well-accepted characterizations. In this survey, we review the landmark results and highlight open questions regarding equilibrium existence and characterization in auctions. In addition, we review the standard assumptions underlying these results, and discuss the suitability of the Nash equilibrium solution concept. We focus our review on single-object auctions, but also review results in multi-unit, divisible, combinatorial and double auctions.

This paper introduces new core and Walrasian equilibrium notions for an asymmetric information economy with non-expected utility preferences. We prove existence and incentive compatibility results for the notions we introduce. We also discuss a framework for ex ante, interim and ex post preferences.

What is the effect of ambiguity aversion on trade? Although in Bewley’s model ambiguity aversion always leads to less trade, in other models this is not always true. However, we show that if the endowments are unambiguous, then more ambiguity aversion implies less trade for a very general class of preferences. The reduction in trade caused by ambiguity aversion can be so severe as to lead to no trade. In an economy with MEU decision makers, we show that if the aggregate endowment is unanimously unambiguous, then every Pareto optimal allocation is also unambiguous. We also characterize the situation in which every unanimously unambiguous allocation is Pareto optimal. Finally, we show how our results can be used to explain the home-bias effect. As a useful result for our methods, we also obtain an additivity theorem for CEU and MEU decision makers that does not require comonotonicity.

Smart Grid (SG) technologies may bring substantial advantages to society, but the required investments are also sizable. This paper establishes a framework for examining the issues related to the SG, and highlights some of the difficulties in establishing a mechanism for paying SG costs. In particular, we show that generators will lose profits as a direct effect of demand response initiatives, and most of the benefits of SG cannot be easily converted into payments.

We provide an overview of the idea of subjective probability and its foundational role in decision making and modern management sciences. We highlight the role of Savage’s theory as an organizing methodology to guide and constrain our modeling of choice under uncertainty, rather than a substantive statement subject to refutations by experimental or psychological evidence.

A parallel between education and transformative processes in standard markets suggests that stricter control of the quality of the output will improve the overall quality of education. This paper shows a somewhat counterintuitive result: an increase in exam difficulty may reduce the average quality (productivity) of the selected individuals. Since the exam does not verify all skills, when its standard rises, candidates with relatively low skills emphasized in the test but high skills demanded on the job may no longer qualify. Hence, an increase in the testing standard may be counterproductive. One implication is that policies should emphasize alignment between the skills tested and those required in actual jobs, rather than increasing exam difficulty.

Many conditions have been introduced to ensure equilibrium existence in games with discontinuous payoff functions. This paper introduces a new condition, called regularity, that is simple and easy to verify. Regularity requires that if there is a sequence of strategies converging to s∗ such that the players’ payoffs along the sequence converge to the best-reply payoffs at s∗, then s∗ is an equilibrium. We show that regularity is implied both by Reny’s better-reply security and by Simon and Zame’s endogenous sharing rule approach. This allows us to explore a link between these two distinct methods. Although regularity implies that the limits of epsilon-equilibria are equilibria, it is in general too weak to imply equilibrium existence. However, we are able to identify extra conditions that, together with regularity, are sufficient for equilibrium existence. In particular, we show how regularity allows the technique of approximating games both by payoff functions and by spaces of strategies.

Within the private-values paradigm, we construct a tractable empirical model of equilibrium behavior at first-price auctions when bidders’ valuations are potentially dependent, but not necessarily affiliated. We develop a test of affiliation and apply our framework to data from low-price, sealed-bid auctions held by the Department of Transportation in the State of Michigan to procure road-resurfacing services: we do not reject the hypothesis of affiliation in cost signals.

The translation of statements from auctions to procurements is not always straightforward. We define a duality relationship between them and provide the appropriate transformations needed for establishing it. Additionally, we prove that affiliation is preserved under these transformations and establish the linkage principle for procurements.

We prove the existence of monotonic pure strategy equilibrium for many kinds of asymmetric auctions with n bidders and unitary demands, interdependent values and independent types. The assumptions require monotonicity only in the bidder’s own type. The payments can be a function of all bids. Thus, we provide a new equilibrium existence result for asymmetric double auctions and a small number of bidders. The generality of our setting requires the use of special tie-breaking rules. We present an example of a double auction with interdependent values where all equilibria are trivial, that is, they have zero probability of trade. This is related to Akerlof’s “market for lemons” example and to the “winner’s curse,” establishing a connection between them. However, we are able to provide sufficient conditions for non-trivial equilibrium existence.

This paper considers a very general class of single- or multi-unit auctions of indivisible objects. The model allows for interdependent values, multidimensional types and any attitude towards risk. Assuming only optimal behavior, we prove that each bid is chosen so as to equalize the marginal benefit and the marginal cost of bidding. This generalizes many existing results in the literature. We use this characterization to obtain sufficient conditions for truthful bidding, monotonic best-reply strategies and identification results for multi-unit auctions.


