The exact definition of the basis functions used in functional data objects depends closely on the type of data or feature that the functional data objects are intended to represent. At the most basic level, Fourier series are commonly used for periodic and near-periodic data (such as weather data and some economic data), whilst spline-based functions are used for non-periodic data \cite{Ramsay_2009}. Beyond this high-level distinction, polynomial, B-spline (constructed from piecewise polynomial segments), and wavelet bases can also be considered. B-splines have been found to be better suited to fitting highly variable curves, for which polynomials would require a large number of basis functions to achieve the same degree of fit; as a result, splines have now largely replaced polynomials. Wavelets have been observed to be particularly good at capturing sharp edges, a notable weakness of Fourier-based functions \cite{Ramsay_2009}. If B-splines are used, it is first necessary to define the number of 'knots' to be used in the representation of a curve (i.e. the joining points linking adjacent polynomial segments in the spline). Setting the number of knots equal to the total number of observations in a time series keeps this definition simple, although it may again result in a large number of basis functions, depending on the length of the series considered.
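As an illustration of the knot choice described above, the following Python sketch uses SciPy (rather than any package necessarily used in this study; the series and knot placement are purely hypothetical) to fit a least-squares cubic B-spline. It shows how the resulting number of basis functions follows directly from the number of interior knots plus the spline order.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Hypothetical noisy series standing in for an observed indicator time series.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)                  # time rescaled onto [0, 1]
y = np.sin(2.0 * np.pi * t) + 0.1 * rng.normal(size=t.size)

k = 3                                          # degree 3, i.e. order-4 (cubic) B-splines
interior = np.linspace(0.0, 1.0, 12)[1:-1]     # 10 interior knots joining the segments
# The full knot vector repeats each boundary knot k + 1 times.
knots = np.concatenate(([t[0]] * (k + 1), interior, [t[-1]] * (k + 1)))

spline = make_lsq_spline(t, y, knots, k=k)
# Number of basis functions = number of interior knots + spline order (k + 1).
print(len(spline.c))  # 14
```

Placing one knot per observation, as mentioned above, simply means a longer `interior` vector, and hence a correspondingly larger number of basis coefficients.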
With regard to best implementation practices for functional data analysis, several recommendations presented in the work of Ramsay should be considered when applying these techniques. Firstly, it is advised that the order of the B-spline functions be at least four higher than the highest-order derivative to be considered in any analysis, so that any significant influences from derivative behaviour are properly captured \cite{Ramsay_2009}. Another important point raised in this literature is the need to rescale time vectors where required, so that the time period spanned by each basis function is not significantly less than 1; otherwise, rounding errors can become an issue when large numbers of basis functions are used \cite{Ramsay_2009}. It is also worth noting here that this study assumes that resampling time series by simple linear interpolation, in order to ensure that a consistent number of observations is used across the technologies being compared, will not introduce significant errors into the assessment of the predictive ability of the different bibliometric indicator groups. In terms of compatibility with feature alignment techniques, the work of Ramsay provides well-documented evidence from previous studies that feature alignment (also referred to as 'landmark registration') often forms a prerequisite to model building with functional data approaches. Time series segmented and aligned on common features, such as aligning technologies against common Technology Life Cycle stages, have thereby been shown to allow a single functional data object to be generated for multiple curves that originally spanned time periods of different lengths \cite{Ramsay_2009}. Lastly, in applying functional data analysis techniques to other examples of growth curves (such as the U.S. Nondurable Goods Index), Ramsay advocates the use of data transformation and smoothing in order to focus on long-term trends rather than periodic or seasonal patterns \cite{Functional_Data_Analysis_data_transformationa,Functional_Data_Analysis_data_smoothing}.
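The linear-interpolation resampling assumed above can be sketched as follows; the function name and the two example series are hypothetical, and NumPy's `np.interp` performs the interpolation onto a common rescaled time grid.

```python
import numpy as np

def resample_linear(y, n_out):
    """Linearly interpolate a series onto n_out evenly spaced points in [0, 1]."""
    t_in = np.linspace(0.0, 1.0, len(y))
    t_out = np.linspace(0.0, 1.0, n_out)
    return np.interp(t_out, t_in, y)

# Two technology series of different lengths, resampled to a common
# number of observations so they can share one functional data object.
short_series = np.array([0.0, 1.0, 4.0, 9.0])   # 4 observations
long_series = np.linspace(0.0, 9.0, 10)         # 10 observations
common = [resample_linear(s, 7) for s in (short_series, long_series)]
print([len(c) for c in common])  # [7, 7]
```

Linear interpolation preserves the endpoints of each series exactly; whether the intermediate values it introduces are acceptable is precisely the assumption noted in the text.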