101.
Exposure to heavy particles can affect the functioning of the central nervous system (CNS), particularly the dopaminergic system. In turn, the radiation-induced disruption of dopaminergic function affects a variety of behaviors that are dependent upon the integrity of this system, including motor behavior (upper body strength), amphetamine (dopamine)-mediated taste aversion learning, and operant conditioning (fixed-ratio bar pressing). Although the relationships between heavy particle irradiation and the effects of exposure depend, to some extent, upon the specific behavioral or neurochemical endpoint under consideration, a review of the available research leads to the hypothesis that the endpoints mediated by the CNS have certain characteristics in common. These include: (1) a threshold, below which there is no apparent effect; (2) the lack of a dose-response relationship, or an extremely steep dose-response curve, depending on the particular endpoint; and (3) the absence of recovery of function, such that the heavy particle-induced behavioral and neural changes are present when tested up to one year following exposure. The current report reviews the data relevant to the degree to which these characteristics are common to neurochemical and behavioral endpoints that are mediated by the effects of exposure to heavy particles on CNS activity.
102.
Failure of a single component on-board a spacecraft can compromise the integrity of the whole system and put its entire capability and value at risk. Part of this fragility is intrinsic to the current dominant design of space systems, which is mainly a single, large, monolithic system. The space industry has therefore recently proposed a new architectural concept termed fractionation, or co-located space-based network (SBN). By physically distributing functions across multiple wirelessly connected orbiting modules, this architecture allows the sharing of resources on-orbit (e.g., data processing, downlinks). It has been argued that SBNs could offer significant advantages over the traditional monolithic architecture as a result of the network structure and the separation of sources of risk in the spacecraft. Careful quantitative analyses are still required to identify the conditions under which SBNs can “outperform” monolithic spacecraft. In this work, we develop Markov models of module failures and replacement to quantitatively compare the lifecycle cost and utility of both architectures. We run Monte-Carlo simulations of the models, and discuss important trends and invariants. We then investigate the impact of our model parameters on the existence of regions in the design space in which SBNs “outperform” the monolithic spacecraft on a cost, utility, and utility per unit cost basis. Beyond the life of one single spacecraft, this paper compares the cost and utility implications of maintaining each architecture type through successive replacements.
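The kind of Monte-Carlo comparison described above can be sketched with a toy model: a monolith loses all utility at the first module-equivalent failure, while an SBN replaces a failed module after a delay and keeps the rest running. The failure rate, horizon, replacement delay, and module-years utility measure below are illustrative assumptions, not the paper's actual parameters.

```python
import random

def simulate(architecture, n_modules=4, lam=0.1, horizon=15.0,
             replace_time=1.0, n_runs=10_000, seed=0):
    """Toy Monte-Carlo estimate of delivered utility (module-years).

    monolith: the first failure of any of the n series modules ends
              the whole mission.
    sbn:      a failed module is replaced after `replace_time` years;
              the surviving modules keep operating meanwhile.
    All parameters are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        if architecture == "monolith":
            # Series system: failure rate is the sum of module rates.
            life = min(rng.expovariate(n_modules * lam), horizon)
            total += n_modules * life
        else:  # "sbn": each module is an independent renewal process
            for _ in range(n_modules):
                t, up = 0.0, 0.0
                while t < horizon:
                    life = rng.expovariate(lam)
                    up += min(life, horizon - t)
                    t += life + replace_time
                total += up
    return total / n_runs

mono = simulate("monolith")
sbn = simulate("sbn")
print(f"monolith ~ {mono:.1f} module-years, SBN ~ {sbn:.1f} module-years")
```

With these (assumed) numbers the SBN delivers markedly more utility, because replacement bounds the consequence of any single failure; the paper's contribution is mapping where in the design space this advantage survives the extra lifecycle cost.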
103.
Choosing the “right” satellite platform for a given market and mission requirements is a major investment decision for a satellite operator. With a variety of platforms available on the market from different manufacturers, and multiple offerings from the same manufacturer, the down-selection process can be quite involved. In addition, because data on on-orbit failures and anomalies per platform are unavailable, incomplete, or fragmented, it is difficult to compare options and make an informed choice with respect to the critical attribute of field reliability of different platforms. In this work, we first survey a large number of geosynchronous satellite platforms by the major satellite manufacturers, and we provide a brief overview of their technical characteristics, timeline of introduction, and number of units launched. We then analyze an extensive database of satellite failures and anomalies, and develop for each platform a “health scorecard” that includes all the minor and major anomalies and complete failures—that is, failure events of different severities—observed on-orbit for each platform. We identify the subsystems that drive these failure events and how much each subsystem contributes to these events for each platform. In addition, we provide the percentage of units in each platform that have experienced failure events, and, after calculating the total number of years logged on-orbit by each platform, we compute its corresponding average failure and anomaly rate. We conclude this work with a preliminary comparative analysis of the health scorecards of different platforms. The concept of a “health scorecard” introduced here provides a useful snapshot of the failure and anomaly track record of a spacecraft platform on orbit. As such, it constitutes a useful and transparent benchmark that can be used by satellite operators to inform their acquisition choices (“inform,” not “base,” as other considerations are factored in when comparing different spacecraft platforms), and by satellite manufacturers to guide their testing and reliability improvement programs. Finally, it is important to keep in mind that these health scorecards should be considered dynamic documents to be updated on a regular basis if they are to remain accurate and relevant for comparative analysis purposes, as new information will impact their content.
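The average failure and anomaly rate described above is, in essence, total observed events divided by total platform-years logged on orbit. A minimal sketch, with hypothetical unit records (launch dates, census date, and event counts are invented for illustration, not scorecard data):

```python
from datetime import date

# Hypothetical on-orbit records for one platform: each unit has a
# launch date, a census cutoff, and a count of observed failure events.
units = [
    {"launch": date(2005, 3, 1), "end": date(2015, 3, 1), "events": 2},
    {"launch": date(2008, 6, 1), "end": date(2015, 3, 1), "events": 0},
    {"launch": date(2010, 1, 1), "end": date(2015, 3, 1), "events": 1},
]

# Total exposure: sum of each unit's years on orbit.
years_on_orbit = sum((u["end"] - u["launch"]).days / 365.25 for u in units)
total_events = sum(u["events"] for u in units)

rate = total_events / years_on_orbit              # events per platform-year
pct_affected = 100 * sum(u["events"] > 0 for u in units) / len(units)

print(f"{years_on_orbit:.1f} platform-years, "
      f"{rate:.3f} events/year, {pct_affected:.0f}% of units affected")
```

The same aggregation, run per subsystem and per severity class, yields the per-platform breakdowns the scorecard reports.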
104.
Type III solar radio bursts have been observed from 10 MHz to 10 kHz by satellite experiments above the terrestrial plasmasphere. Solar radio emission in this frequency range results from excitation of the interplanetary plasma by energetic particles propagating outward along open field lines over distances from 5 R⊙ to at least 1 AU from the Sun. This review summarizes the morphology, characteristics and analysis of individual bursts as well as storms of bursts. Substantial evidence is available to show that the radio emission is observed at the second harmonic instead of the fundamental of the plasma frequency. This brings the density scale derived by radio observations into better agreement with direct solar wind density measurements at 1 AU and relaxes the requirement for type III propagation along large density-enhanced regions. This density scale, with the measured direction of arrival of the radio burst, allows the trajectory of the exciter path to be determined from 10 R⊙ to 1 AU. Thus, for example, the dynamics and gross structure of the interplanetary magnetic field can be investigated by this method. Burst rise times are interpreted in terms of exciter length and dispersion, while decay times refer to the radiation damping process. The combination of radio observations at the lower frequencies and in-situ measurements of non-relativistic electrons at 1 AU provides data on the energy range and efficiency of the wave-particle interactions responsible for the radio emission.
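The density-scale argument rests on the standard relation between electron density and plasma frequency, f_p ≈ 8.98 kHz × √(n_e/cm⁻³): interpreting an observed burst frequency as the second harmonic rather than the fundamental lowers the inferred local density by a factor of four. A short sketch (the 60 kHz example frequency is illustrative, not from the review):

```python
import math

def plasma_freq_khz(n_e_cm3):
    # Electron plasma frequency: f_p ~ 8.98 kHz * sqrt(n_e / cm^-3)
    return 8.98 * math.sqrt(n_e_cm3)

def density_from_freq(f_khz, harmonic=1):
    # Invert f = harmonic * f_p for the local electron density (cm^-3).
    return (f_khz / (harmonic * 8.98)) ** 2

# A burst observed at an illustrative 60 kHz near 1 AU:
n_fund = density_from_freq(60.0, harmonic=1)  # fundamental interpretation
n_harm = density_from_freq(60.0, harmonic=2)  # second-harmonic interpretation
print(f"fundamental: {n_fund:.1f} cm^-3, harmonic: {n_harm:.1f} cm^-3")
```

The second-harmonic value is the one closer to typical directly measured solar-wind densities of a few to ~10 cm⁻³ at 1 AU, which is the agreement the abstract refers to.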
105.
Changed political objectives, straitened economic circumstances and an altered balance of capability and expertise in space endeavours have together produced more discussion of the need for international cooperation than ever before, but the meaning of the term has subtly shifted. Insisting on US ‘leadership’ is self-defeating; what is instead firmly desired by the USA's potential partners is a balanced relationship in which each nation makes an identifiable contribution and takes the lead in at least one of a package of projects. If this is going to happen, there must be an international forum or framework in which projects can be discussed and agreed. This in turn requires initiative at governmental level.
106.
Summary. A. Spectral features. The ability of the various theories to explain the three main spectral features at 1/4 keV, 60 keV and 1 MeV is summarized in Tables II and III. Clearly, confirmation of the reality of these features, especially the soft X-ray and γ-ray excesses, is one of the key elements in enabling us to decide between the competing theoretical interpretations.
B. Energy requirements. None of the proposed interpretations are easily explained in terms of the available energy in cosmic rays (except perhaps the Seyfert galaxy proposal, and this runs into difficulties). It seems that one either has to regard normal galaxies at the present epoch as prolific sources of cosmic rays (~10⁶⁰ erg/galaxy in protons), as is required by the Brecher-Morrison model, or to argue that at early stages in their evolution far more energy is available than at present. One ends up with much the same energy requirement in this approach. One could conceivably identify such an early phase with the radio galaxy or QSO phenomena: in any event, cosmological evolution plays a major role. Cosmology does ease the energy requirements, but only for the inefficient mechanisms, such as nonthermal bremsstrahlung or π⁰-production. It seems that one still needs the metagalactic cosmic-ray flux to be ~10⁻² of the galactic flux in the diffuse inverse-Compton models, and ~10⁻²–10⁻⁴ in the nonthermal bremsstrahlung models. Faced with problems of energetics, one is tempted to turn to the most energetic objects in the Universe, namely Seyfert nuclei and QSOs, to provide the basic energy source, whether directly or indirectly, for the diffuse X-ray background. A direct connection could be more readily investigated when X-ray observations are available of more extra-galactic sources.
C. Angular variations. Another approach, complementary to that of looking for remote discrete sources, is to seek angular fluctuations, or limits on such fluctuations, in the diffuse X-ray background. The best results presently available are those from the X-ray experiment on board OSO 3. Schwartz (1970) reports a limit of ΔI/I ≲ 4% on small-scale (~10°) fluctuations over 10–100 keV over about one-quarter of the sky. If one assumes a […] astrophysics, namely the origin of cosmic rays, is intimately linked to the origin of the X-ray background. It may well be that no single mechanism suffices to account for the entire spectrum of isotropic X- and γ-radiation. Nature is sufficiently perverse for there to be a reasonable probability that several different processes are contributing, and considerable ingenuity will be required to ascertain which mechanism, if any, is assigned the dominant role in a given spectral region.
This review is based on an invited paper presented at the joint meeting of the A.A.S. Division of High Energy Astrophysics and the A.P.S. Division of Cosmic Physics, Washington, D.C., 28 April–1 May, 1970.
107.
On long-duration missions to other planets astronauts will be exposed to types and doses of radiation that are not experienced in low earth orbit. Previous research using a ground-based model for exposure to cosmic rays has shown that exposure to heavy particles, such as 56Fe, disrupts spatial learning and memory measured using the Morris water maze. Maintaining rats on diets containing antioxidant phytochemicals for 2 weeks prior to irradiation ameliorated this deficit. The present experiments were designed to determine: (1) the generality of the particle-induced disruption of memory by examining the effects of exposure to 56Fe particles on object recognition memory; and (2) whether maintaining rats on these antioxidant diets for 2 weeks prior to irradiation would also ameliorate any potential deficit. The results showed that exposure to low doses of 56Fe particles does disrupt recognition memory and that maintaining rats on antioxidant diets containing blueberry and strawberry extract for only 2 weeks was effective in ameliorating the disruptive effects of irradiation. The results are discussed in terms of the mechanisms by which exposure to these particles may produce effects on neurocognitive performance.
108.
Clouds and Hazes of Venus
More than three decades have passed since the publication of the last review of the Venus clouds and hazes. That paper, published in 1983 in the Venus book, summarized the discoveries and findings of the US Pioneer Venus mission and a series of Soviet Venera spacecraft (Esposito et al. in Venus, p. 484, 1983). Due to the emphasis on in-situ investigations from descent probes, those missions established the basic features of the Venus cloud system: its vertical structure, composition and microphysical properties. Since then, significant progress in understanding the Venus clouds has been achieved through new observation techniques on board the Galileo and MESSENGER flyby spacecraft and the Venus Express and Akatsuki orbiters. These included detailed investigation of the mesospheric hazes in solar and stellar occultation geometry, applied over a broad spectral range from the UV to the thermal IR. Imaging spectroscopy in the near-IR transparency “windows” on the night side opened a new and very effective way of sounding the deep atmosphere. This technique, together with near-simultaneous UV imaging, enabled comprehensive study of the cloud morphology from the cloud top to its deep layers. Venus Express operated from April 2006 until December 2014 and provided a continuous data set characterizing the Venus clouds and hazes over a time span of almost 14 Venus years, enabling a detailed study of temporal and spatial variability. The polar orbit of Venus Express allowed complete latitudinal coverage. These studies are being complemented by the JAXA Akatsuki orbiter, which began observations in May 2016. This paper reviews the current status of our knowledge of the Venus cloud system, focusing mainly on the results acquired after the Venera, Pioneer Venus and Vega missions.
109.
The forthcoming 10 cm range tracking accuracy capability holds much promise in connection with a number of Earth and ocean dynamics investigations. These include a set of earthquake-related studies of fault motions and the Earth's tidal, polar and rotational motions, as well as studies of the gravity field and the sea surface topography, which should furnish basic information about mass and heat flow in the oceans. The state of the orbit analysis art is presently at about the 10 m level, or about two orders of magnitude away from the 10 cm range accuracy capability expected in the next couple of years or so. The realization of a 10 cm orbit analysis capability awaits the solution of four kinds of problems, namely those involving orbit determination and the lack of sufficient knowledge of tracking system biases, the gravity field, and tracking station locations. The Geopause satellite system concept offers promising approaches in all of these areas. A typical Geopause satellite orbit has a 14-hour period, a mean height of about 4.6 Earth radii, and is nearly circular, polar, and normal to the ecliptic. At this height only relatively few gravity terms have uncertainties corresponding to orbital perturbations above the decimeter level. The orbit is, in this sense, at the geopotential boundary, i.e., the geopause. The few remaining environmental quantities which may be significant can be determined by means of orbit analyses and accelerometers. The Geopause satellite system also provides the tracking geometry and coverage needed for determining the orbit, the tracking system biases and the station locations. Studies indicate that the Geopause satellite, tracked with a 2 cm ranging system from nine NASA-affiliated sites, can yield decimeter station location accuracies. Five or more fundamental stations well distributed in longitude can view Geopause over the North Pole. This means not only that redundant data are available for determining tracking system biases, but also that both components of the polar motion can be observed frequently. When tracking Geopause, the NASA sites become a two-hemisphere configuration which is ideal for a number of Earth physics applications, such as observation of the polar motion with a time resolution of a fraction of a day. Geopause also provides the basic capability for satellite-to-satellite tracking of drag-free satellites for mapping the gravity field and altimeter satellites for surveying the sea surface topography. Geopause tracking a coplanar, drag-free satellite for two months to 0.03 mm per second accuracy can yield the geoid over the entire Earth to decimeter accuracy with 2.5° spatial resolution. Two Geopause satellites tracking a coplanar altimeter satellite can then yield ocean surface heights above the geoid with 7° spatial resolution every two weeks. These data will furnish basic boundary condition information about mass and heat flows in the oceans, which are important in shaping weather and climate.
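The quoted 14-hour period and ~4.6 Earth-radii orbit size are mutually consistent under Kepler's third law, a = (μT²/4π²)^(1/3). A quick check (the gravitational parameter and Earth radius are standard values, not taken from the paper):

```python
import math

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.371e6           # m, mean Earth radius

def semi_major_axis(period_s):
    # Kepler's third law: a = (mu * T^2 / (4 * pi^2))^(1/3)
    return (MU_EARTH * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

a = semi_major_axis(14 * 3600.0)   # 14-hour period
print(f"a = {a / 1e3:.0f} km = {a / R_EARTH:.2f} Earth radii")
```

The result is about 4.6 Earth radii, matching the orbit described for the Geopause satellite.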
110.
If one accepts the overriding economic and social importance of telecommunications, then the importance of major shifts in international satellite policy becomes clear. In particular, proposals within the USA to redefine its relationship to the INTELSAT global satellite system take on broader significance than might be initially assumed. To understand why this is so, one must first focus on the role of INTELSAT and how it has changed the world of global telecommunications, at the national, regional and international levels.