691.
Light scattering in planetary atmospheres   Cited by: 45 (self-citations: 0, by others: 45)
This paper reviews scattering theory required for analysis of light reflected by planetary atmospheres. Section 1 defines the radiative quantities which are observed. Section 2 demonstrates the dependence of single-scattered radiation on the physical properties of the scatterers. Section 3 describes several methods to compute the effects of multiple scattering on the reflected light.
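As a side illustration of the single-scattering quantities that the abstract's Section 2 refers to (this sketch is not taken from the paper itself), the Rayleigh phase function for particles much smaller than the wavelength has a standard closed form; the snippet below checks its normalization numerically:

```python
import numpy as np

def rayleigh_phase(theta):
    """Rayleigh single-scattering phase function P(theta), normalized so
    that its average over all scattering directions is 1."""
    return 0.75 * (1.0 + np.cos(theta) ** 2)

# Numerical check of the normalization:
# (1 / 4pi) * Integral of P(theta) dOmega = 1, with dOmega = 2*pi*sin(theta) dtheta.
# The integrand vanishes at both endpoints, so a plain sum * step equals
# the trapezoid rule here.
theta = np.linspace(0.0, np.pi, 100001)
dtheta = theta[1] - theta[0]
integrand = rayleigh_phase(theta) * np.sin(theta)
norm = integrand.sum() * dtheta * 2.0 * np.pi / (4.0 * np.pi)
print(norm)
```

The same numerical check applies to any phase function one substitutes for the Rayleigh form, e.g. when comparing against Mie calculations for larger particles.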
692.
This paper deals with the application of modern estimation techniques to the problem of speech data rate reduction. It is desirable to adaptively identify and quantize the parameters of the speech model. These parameters cannot be identified and quantized exactly; the performance of the predictor is thereby degraded, and this could prevent data reduction. In many cases it is desirable to employ a suboptimal predictor in order to simplify the algorithms, and predictor performance is again degraded. This paper develops sensitivity and error analysis as a potential method for determining quantitatively how speech data reduction system performance is degraded by imprecise parameter knowledge or suboptimal filtering. An intended use of the sensitivity and error analysis algorithms is to determine parameter identification and model structure requirements of configuration concepts for adaptive speech digitizers. First, sensitivity and error analysis algorithms are presented that form the basis for the remainder of the work. The algorithms are then used to determine how imprecise knowledge of vocal tract parameters degrades predictor performance in speech. Transversal filters have previously been proposed for this application. The sensitivity analysis algorithms are then used to determine when and by how much the transversal filter is suboptimal relative to the Kalman filter. In particular, the question of how effectively a higher-order all-pole model approximates a system with zeros is answered, as this question is of considerable importance in speech. Finally, the physical significance of the innovations process in speech data rate reduction is studied.
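The abstract's question of how well a higher-order all-pole model approximates a system with zeros can be illustrated with a standard linear-prediction sketch (this is not the paper's algorithm; the test signal and model orders are invented for illustration): fit all-pole predictors of increasing order to a pole-zero process via the Levinson-Durbin recursion and watch the prediction-error power fall toward the innovations variance.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations for an all-pole (AR) predictor of
    the given order from autocorrelations r[0..order]; returns the
    coefficient vector a (with a[0] = 1) and the prediction-error power."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i]
        for j in range(1, i):
            acc += a[j] * r[i - j]
        k = -acc / err                 # reflection coefficient
        a_new = a.copy()
        for j in range(1, i):
            a_new[j] = a[j] + k * a[i - j]
        a_new[i] = k
        a = a_new
        err *= (1.0 - k * k)           # error power never increases
    return a, err

# A pole-zero ("system with zeros") test signal:
# y[n] = 0.5*y[n-1] + x[n] + 0.9*x[n-1], driven by unit white noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
y = np.zeros_like(x)
y[0] = x[0]
for n in range(1, len(x)):
    y[n] = 0.5 * y[n - 1] + x[n] + 0.9 * x[n - 1]

# Biased sample autocorrelations, then all-pole fits of increasing order.
r = np.array([y[: len(y) - lag] @ y[lag:] for lag in range(9)]) / len(y)
errs = [levinson_durbin(r, p)[1] for p in (1, 2, 4, 8)]
print(errs)  # error power falls toward the unit innovations variance
```

A finite all-pole model can only absorb the zero asymptotically (its equivalent AR expansion is infinite), which is why the residual error keeps shrinking as the order grows rather than reaching the floor at any finite order.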
693.
In radars that achieve high subclutter visibility by coherent processing over several pulses, a serious problem appears in the form of blind Dopplers, or "speeds," at which target detection is impossible. Of the possible methods of eliminating these blind speeds, the most basic one, employed when the performance requirements are high, involves the use of several PRFs. These PRFs are chosen so that coverage is obtained at any Doppler with at least one PRF. The problem faced by the radar designer is to select the set of PRFs and the pulse numbers for each PRF so that the search frame time is minimized. This paper evolves a systematic method for the design of the blind-speed elimination scheme. A formalized approach is offered that shows the possible combinations of wavelength, PRF, and pulse number and the tradeoffs involved, without introducing the confusion ordinarily associated with multiparameter choices.
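A minimal sketch of the blind-speed arithmetic the abstract relies on (the wavelength and PRF values below are hypothetical, not from the paper): a target is blind when its Doppler shift is a multiple of the PRF, i.e. v_blind = n * wavelength * PRF / 2, and staggering two PRFs pushes the first common blind speed out to the least common multiple of the individual ones.

```python
# Hypothetical numbers for illustration: a 10 cm wavelength radar.
WAVELENGTH = 0.10  # m

def blind_speeds(prf_hz, n_max):
    """Radial speeds whose Doppler shift is a multiple of the PRF, so the
    target folds onto zero Doppler and is rejected along with the clutter:
    v_blind = n * wavelength * PRF / 2, for n = 1..n_max."""
    return [n * WAVELENGTH * prf_hz / 2.0 for n in range(1, n_max + 1)]

# A single 1 kHz PRF is blind every 50 m/s.
single = blind_speeds(1000.0, 5)

# Two PRFs in a 4:5 ratio (800 Hz and 1000 Hz): a speed is blind for the
# pair only if it is blind at BOTH PRFs, i.e. at common multiples of the
# individual first blind speeds (40 m/s and 50 m/s).
a = {round(v, 6) for v in blind_speeds(800.0, 50)}
b = {round(v, 6) for v in blind_speeds(1000.0, 50)}
first_common = min(a & b)
print(single[0], first_common)
```

With these made-up numbers the pair's first common blind speed moves from 40-50 m/s out to 200 m/s; the paper's design problem is then choosing such PRF sets, together with per-PRF pulse numbers, so that the search frame time is minimized.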
694.
695.
696.
Recent examinations of extraterrestrial materials exposed to cosmic rays for different intervals of time during the geological history of the solar system have generated a wealth of new information on the history of cosmic radiation. This information relates to the temporal variations in
  1. the flux and energy spectrum of low-energy (solar) protons of ≤ 10 MeV kinetic energy;
  2. the flux and energy spectrum of (solar) heavy nuclei of Z > 20 with kinetic energy 0.5–10 MeV/n;
  3. the integrated flux of protons and heavier nuclei of ≥ 0.5 GeV kinetic energy; and
  4. the flux and energy spectrum of medium-energy nuclei of Z > 20 (100–2000 MeV/n kinetic energy).
The above studies are entirely based on the natural detector method, which utilises two principal cosmogenic effects observed in rocks: (i) isotopic changes and (ii) changes in the crystalline structure of rock constituents, due to cosmogenic interactions. The information available to date in the field of hard-rock cosmic ray archaeology refers to meteorites and lunar rocks/soil. Additional information based on the study of cosmogenic effects in man-made materials exposed to cosmic radiation in space is also discussed. It is shown that the natural detectors, in spite of their extreme simplicity, have begun to provide cosmic ray information in a quantitative and precise manner comparable to the most sophisticated electronic particle detectors. The single handicap in using the hard-rock detectors, however, is the uncertainty regarding their manner of exposure, geometry, etc. At present, a variety of techniques are being used to study the evolutionary history of extraterrestrial materials, and as this field grows, uncertainties in cosmic ray archaeology will correspondingly decrease.
697.
The modulation of galactic cosmic rays in the heliosphere seems to be dominated by four major mechanisms: convection, diffusion, drifts (gradient, curvature, and current sheet), and adiabatic energy losses. In this regard the global structure of the solar wind, the heliospheric magnetic field (HMF), the current sheet (HCS), and that of the heliosphere itself play major roles. Individually, the four mechanisms are well understood, but in combination the complexity increases significantly, especially their evolution with time as a function of solar activity. The Ulysses observations contributed significantly during the past solar minimum modulation period to establishing the relative importance of these major mechanisms, leading to renewed interest in developing more sophisticated numerical models and in the underlying physics, e.g., what determines the diffusion tensor. With increased solar activity, the relative contributions of the mentioned mechanisms change, but how they change and what causes these changes over an 11-year solar cycle is not well understood. It can therefore be expected that present and forthcoming observations during solar maximum activity will again produce very important insights into the causes of long-term modulation. In this paper the basic theory of solar modulation is reviewed for galactic cosmic rays. The influence of the Ulysses observations on the development of the basic theory and numerical models is discussed, especially those observations that have challenged the theory and models. Model-based predictions are shown for what might be encountered during the next solar minimum. Lastly, modulation theory and modelling are discussed for periods of maximum solar activity, when a global reorganization of the HMF and the HCS occurs. This revised version was published online in August 2006 with corrections to the Cover Date.
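The four mechanisms listed in the abstract correspond term by term to Parker's cosmic-ray transport equation, which underlies the numerical models discussed; for reference (the equation itself is not reproduced in the abstract), a standard form is

```latex
\frac{\partial f}{\partial t} =
    -\left(\mathbf{V}_{\mathrm{sw}} + \langle \mathbf{v}_{d} \rangle\right)\cdot\nabla f
    + \nabla\cdot\left(\mathbf{K}_{s}\cdot\nabla f\right)
    + \frac{1}{3}\left(\nabla\cdot\mathbf{V}_{\mathrm{sw}}\right)\frac{\partial f}{\partial \ln p},
```

where f(r, p, t) is the omnidirectional distribution function: the first term combines convection by the solar wind V_sw with the pitch-angle-averaged drift velocity, the second is diffusion through the symmetric tensor K_s, and the last is the adiabatic energy change driven by the expansion of the wind.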
698.
An Overview of the Fast Auroral SnapshoT (FAST) Satellite   Cited by: 3 (self-citations: 0, by others: 3)
Pfaff  R.  Carlson  C.  Watzin  J.  Everett  D.  Gruner  T. 《Space Science Reviews》2001,98(1-2):1-32
The FAST satellite is a highly sophisticated scientific satellite designed to carry out in situ measurements of acceleration physics and related plasma processes associated with the Earth's aurora. Initiated and conceptualized by scientists at the University of California at Berkeley, this satellite is the second of NASA's Small Explorer Satellite program designed to carry out small, highly focused, scientific investigations. FAST was launched on August 21, 1996 into a high inclination (83°) elliptical orbit with apogee and perigee altitudes of 4175 km and 350 km, respectively. The spacecraft design was tailored to take high-resolution data samples (or "snapshots") only while it crosses the auroral zones, which are latitudinally narrow sectors that encircle the polar regions of the Earth. The scientific instruments include energetic electron and ion electrostatic analyzers, an energetic ion instrument that distinguishes ion mass, and vector DC and wave electric and magnetic field instruments. A state-of-the-art flight computer (or instrument data processing unit) includes programmable processors that trigger the burst data collection when interesting physical phenomena are encountered and stores these data in a 1 Gbit solid-state memory for telemetry to the Earth at later times. The spacecraft incorporates a light, efficient, and highly innovative design, which blends proven sub-system concepts with the overall scientific instrument and mission requirements. The result is a new breed of space physics mission that gathers unprecedented fields and particles observations that are continuous and uninterrupted by spin effects. In this and other ways, the FAST mission represents a dramatic advance over previous auroral satellites. This paper describes the overall FAST mission, including a discussion of the spacecraft design parameters and philosophy, the FAST orbit, instrument and data acquisition systems, and mission operations.
699.
The problem with aviation COTS   Cited by: 1 (self-citations: 0, by others: 1)
Commercial Off-the-Shelf (COTS) has become a byword for acquisition reform, but there are significant risks associated with the use of COTS products in military systems. These risks are especially acute for aviation systems. This paper explains how COTS can negatively affect military acquisitions and gives ideas on how to plan for and resolve COTS-caused problems.
700.
Frequency measurements made at a moving platform can be used to locate an emitter. An error ellipsoid analysis is used to compare the performance under three levels of a priori information on the emitter's altitude: (1) no knowledge, (2) terrain data, and (3) complete knowledge of the emitter's altitude. The analysis is performed for two simple platform paths that provide frequency measurements that are approximately time-reversed versions of one another. When no a priori knowledge is available, there is little difference between the performance when the platform maneuvers on a concave circular path or on a convex circular path, and the performance depends very little on the platform altitude. However, when some a priori altitude information is available, the performance is markedly different on the two paths and is highly dependent on the platform altitude. Thus, this analysis provides the unexpected result that for seemingly similar platform paths, the performance can vary markedly when the emitter altitude is assumed known. Also, an interesting result is that in some cases it is possible to achieve better x-y accuracy when using terrain data than when the emitter's z location is known, because the terrain data provides terrain slope information. These cases are characterized in terms of the terrain slope at the emitter.
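The geometry the abstract describes can be sketched as a toy simulation (all numbers below are hypothetical, this is not the paper's error-ellipsoid analysis, and the emitter's carrier frequency is assumed known for simplicity): a platform on a circular path collects Doppler-shifted frequency measurements, and a grid search over x-y with the altitude assumed known exactly (the abstract's case 3) recovers the emitter.

```python
import numpy as np

C = 3.0e8    # propagation speed, m/s
F0 = 1.0e9   # emitter carrier frequency, Hz (assumed known here)

def received_freq(p_pos, p_vel, e_pos):
    """Doppler-shifted frequency seen by the platform from a stationary
    emitter: f = F0 * (1 + v.u/c), u the unit vector toward the emitter
    (a positive closing speed raises the received frequency)."""
    u = e_pos - p_pos
    u = u / np.linalg.norm(u)
    return F0 * (1.0 + (p_vel @ u) / C)

# Platform on a circular arc at 200 m/s and 3 km altitude.
t = np.linspace(0.0, 60.0, 121)
R, omega = 5000.0, 200.0 / 5000.0
pos = np.stack([R * np.cos(omega * t), R * np.sin(omega * t),
                3000.0 * np.ones_like(t)], axis=1)
vel = np.stack([-200.0 * np.sin(omega * t), 200.0 * np.cos(omega * t),
                np.zeros_like(t)], axis=1)

emitter = np.array([12000.0, 4000.0, 0.0])   # ground truth, on the grid
meas = np.array([received_freq(p, v, emitter) for p, v in zip(pos, vel)])

# Case (3) of the abstract: emitter altitude known exactly, so search
# only in x-y for the candidate that best reproduces the measurements.
best, best_err = None, np.inf
for xg in np.arange(8000.0, 16000.1, 500.0):
    for yg in np.arange(0.0, 8000.1, 500.0):
        cand = np.array([xg, yg, 0.0])
        pred = np.array([received_freq(p, v, cand) for p, v in zip(pos, vel)])
        sse = float(np.sum((meas - pred) ** 2))
        if sse < best_err:
            best, best_err = cand, sse
print(best[:2])
```

The measurements here are noiseless, so the fit is exact; the paper's contribution is quantifying, via error ellipsoids, how measurement noise maps into position error under the three altitude-knowledge assumptions and the two path geometries.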
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号