*The paper works as a whole. The reviewer didn't want anything cut: '''It is a collection of case studies and definitions''', and '''I don't have ... major comments'''.
Reviewer 3's comments are more problematic. Jim kindly reminded me to ascribe only normative motivations to the reviewer (rather than positive theories). He also said to '''follow the review''' and respond to each point with one of three sentiments:
*Disagree
*Good point but beyond the scope of the paper
*Good point and I address it like this...
He reminded me that the median review is a reject, and that the reviewer does say:
:"The defining of key terms is a useful contribution of this paper, as are the identification of potentially useful metrics of HGHT entrepreneurship. Furthermore, the examples are often helpful in highlighting their various applications."
The reviewer listed the following three major flaws:
:"In my view, the major flaws in the study are not (1) considering possible downsides of each individual metric, (2) considering possible downsides in using the entire battery of measures, and (3) testing alternatives to this measurement approach."
So, I added material from the rest of his review (see below) to create the following bullet points that I'll address:
#Consider possible downsides of the individual metrics.
#Consider possible downsides in using the entire battery of measures.
#Test alternatives to this measurement approach and test the effects of using a framework on various outcomes.
#Discuss the welfare implications of putting more information in the hands of policymakers, including:
##It is not necessarily true that "measurement" improves outcomes, such as policy, overall.
##Measurements are rarely if ever neutral (vis-à-vis behavior). There can be systematic bias due to gaming metrics incentivized by rewards attached to the measurement.
#Consider the relationship between the conceptual phenomenon and how it is operationalized:
##Discuss alternative calculations of a measure when applicable (i.e., for the ranking measure).
##Discuss that measurement is reductive.
##Discuss systematic bias.
Also for reference, here are the measures in the latest version of the paper:
*Measure 1 (Apportioned investment and exit value)
*Measure 2 (MOOMI ratio)
*Measure 3 (Pipeline)
*Measure 4 (Raise Rate)
*Measure 5 (Repeat VC)
*Measure 6 (ESO Expertise)
The old Startup Ranking measure has been dropped. Also, there are two definitions that are close to being measures:
*Definition 5 (Local VC)
*Definition 8 (Expert)
Measures 1 and 2 provide ways to calculate proxies for return quartiles. Measures 3 and 4 are the central measures. Measures 5 and 6 are alternative ways to assess the performance of pipeline components (measure 6 is possible without VC investment data).
===RP Constraints===