
Showing papers by "Indian Institute of Technology Bombay" published in 2001


Journal ArticleDOI
TL;DR: In this article, three analytical studies of base-isolated structures are carried out, and the effect of isolation damping on the performance of different isolation systems under near-fault motion is investigated.
Abstract: Three analytical studies of base-isolated structures are carried out. First, six pairs of near-fault motions oriented in directions parallel and normal to the fault were considered, and the average of the response spectra of these earthquake records was obtained. This study shows that in addition to pulse-type displacements, these motions contain significant energy at high frequencies and that the real and pseudo-velocity spectra are quite different. The second analysis modelled an isolated structure with a flexible superstructure to study the effect of isolation damping on the performance of different isolation systems under near-fault motion. The results show that there exists a value of isolation system damping for which the superstructure acceleration for a given structural system attains a minimum value under near-fault motion. Therefore, although increasing the bearing damping beyond a certain value may decrease the bearing displacement, it may transmit higher accelerations into the superstructure. Finally, the behaviour of four isolation systems subjected to the normal component of each of the near-fault motions was studied, showing that EDF-type isolation systems may be the optimum choice for the design of isolated structures in near-fault locations. Copyright © 2001 John Wiley & Sons, Ltd.
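To make the damping trade-off concrete, the following is a minimal numerical sketch, not the paper's model: a two-degree-of-freedom base-isolated structure excited by an idealized one-cycle acceleration pulse, with the isolation damping ratio swept to expose a minimum in peak superstructure acceleration. All masses, periods, and pulse parameters are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

m_b, m_s = 1.0, 1.0      # base and superstructure masses (assumed)
T_iso, T_sup = 2.5, 0.4  # isolation and superstructure periods in s (assumed)
k_b = (m_b + m_s) * (2 * np.pi / T_iso) ** 2
k_s = m_s * (2 * np.pi / T_sup) ** 2

def ground_accel(t, a0=3.0, Tp=1.0):
    """One-cycle sinusoidal pulse: a crude stand-in for near-fault motion."""
    return a0 * np.sin(2 * np.pi * t / Tp) if t <= Tp else 0.0

def peak_superstructure_accel(zeta_b, zeta_s=0.02):
    """Peak absolute superstructure acceleration for isolation damping zeta_b."""
    c_b = 2 * zeta_b * np.sqrt(k_b * (m_b + m_s))
    c_s = 2 * zeta_s * np.sqrt(k_s * m_s)
    M = np.diag([m_b, m_s])
    K = np.array([[k_b + k_s, -k_s], [-k_s, k_s]])
    C = np.array([[c_b + c_s, -c_s], [-c_s, c_s]])

    def rhs(t, y):
        x, v = y[:2], y[2:]
        a_rel = np.linalg.solve(M, -C @ v - K @ x) - ground_accel(t)
        return np.concatenate([v, a_rel])

    sol = solve_ivp(rhs, (0.0, 10.0), np.zeros(4),
                    t_eval=np.linspace(0.0, 10.0, 2000), max_step=0.005)
    # absolute acceleration = relative acceleration + ground acceleration,
    # which equals M^-1(-C v - K x); row 1 is the superstructure.
    abs_acc = [np.linalg.solve(M, -C @ sol.y[2:, i] - K @ sol.y[:2, i])[1]
               for i in range(sol.t.size)]
    return max(abs(a) for a in abs_acc)

for zeta in (0.05, 0.10, 0.20, 0.30, 0.40, 0.50):
    print(f"zeta_b = {zeta:.2f} -> peak |a_s| = "
          f"{peak_superstructure_accel(zeta):.3f}")
```

Sweeping zeta_b and printing the peak acceleration is enough to see whether an intermediate damping value minimizes the superstructure response, which is the qualitative effect the study reports.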

396 citations


Proceedings ArticleDOI
02 Apr 2001
TL;DR: A notion of convertible constraints is developed; this class is systematically analyzed, classified, and characterized, and techniques are developed that enable such constraints to be readily pushed deep inside the recently developed FP-growth algorithm for frequent itemset mining.
Abstract: Recent work has highlighted the importance of the constraint-based mining paradigm in the context of frequent itemsets, associations, correlations, sequential patterns, and many other interesting patterns in large databases. The authors study constraints which cannot be handled with existing theory and techniques. For example, avg(S) θ v, median(S) θ v, and sum(S) θ v (where S can contain items of arbitrary values and θ ∈ {≥, ≤}) are customarily regarded as "tough" constraints in that they cannot be pushed inside an algorithm such as Apriori. We develop a notion of convertible constraints and systematically analyze, classify, and characterize this class. We also develop techniques which enable them to be readily pushed deep inside the recently developed FP-growth algorithm for frequent itemset mining. Results from our detailed experiments show the effectiveness of the techniques developed.
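As a small self-contained illustration of why such constraints become usable once items are ordered, the sketch below (with invented item values and threshold, not the paper's FP-growth integration) enumerates itemsets in value-descending prefix order; there, avg(S) ≥ v behaves anti-monotonically, so a violating prefix can be pruned together with all of its extensions.

```python
# Hypothetical item values and threshold, purely for illustration.
value = {'a': 90, 'b': 70, 'c': 40, 'd': 10}
v = 50

items = sorted(value, key=value.get, reverse=True)  # value-descending order

def enumerate_itemsets(prefix, start, total):
    """Enumerate itemsets in prefix order, pruning when avg(S) < v.
    In descending order every later item is no larger than the current
    average, so once the average drops below v it can never recover."""
    for i in range(start, len(items)):
        s = total + value[items[i]]
        n = len(prefix) + 1
        if s / n < v:
            break  # prune this extension and all later ones at this level
        itemset = prefix + [items[i]]
        print(itemset, 'avg =', s / n)
        enumerate_itemsets(itemset, i + 1, s)

enumerate_itemsets([], 0, 0)
```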

372 citations


Journal ArticleDOI
TL;DR: This paper presents an analytical formulation and solutions for the natural frequency analysis of simply supported composite and sandwich plates, based on a higher-order refined theory developed by the first author and already reported in the literature.

352 citations


Journal ArticleDOI
TL;DR: This paper reviews displacement- and stress-based refined theories for isotropic and anisotropic laminated plates, discussing their merits and demerits and citing exact elasticity solutions for plate problems wherever available.
Abstract: A review of displacement- and stress-based refined theories for isotropic and anisotropic laminated plates is presented. Various equivalent single-layer and layerwise theories for laminated plates are discussed together with their merits and demerits. Exact elasticity solutions for the plate problems are cited, wherever available. Various critical issues related to plate theories are presented, based on the literature reviewed.

342 citations


Proceedings ArticleDOI
01 Apr 2001
TL;DR: This paper shows how to combine push and pull-based techniques to achieve the best features of both approaches and demonstrates that such adaptive data dissemination is essential to meet diverse temporal coherency requirements, to be resilient to failures, and for the efficient and scalable utilization of server and network resources.
Abstract: An important issue in the dissemination of time-varying web data such as sports scores and stock prices is the maintenance of temporal coherency. In the case of servers adhering to the HTTP protocol, clients need to frequently pull the data based on the dynamics of the data and a user's coherency requirements. In contrast, servers that possess push capability maintain state information pertaining to clients and push only those changes that are of interest to a user. These two canonical techniques have complementary properties with respect to the level of temporal coherency maintained, communication overheads, state space overheads, and loss of coherency due to (server) failures. In this paper, we show how to combine push- and pull-based techniques to achieve the best features of both approaches. Our combined technique tailors the dissemination of data from servers to clients based on 1) the capabilities and load at servers and proxies and 2) clients' coherency requirements. Our experimental results demonstrate that such adaptive data dissemination is essential to meet diverse temporal coherency requirements, to be resilient to failures, and for the efficient and scalable utilization of server and network resources. Index Terms: Dynamic data, temporal coherency, scalability, resiliency, World Wide Web, data dissemination, push, pull.
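The pull half of such a combined scheme turns on adapting the client's polling interval, its Time-To-Refresh (TTR), to how fast the data is changing relative to the user's coherency requirement. The sketch below shows one plausible multiplicative update rule; the constants and the rule itself are illustrative assumptions, not the paper's algorithm.

```python
def next_ttr(ttr, last_value, new_value, c,
             ttr_min=1.0, ttr_max=60.0, speedup=0.5, slowdown=2.0):
    """Next polling interval (s) for a user who tolerates deviations up to c."""
    if abs(new_value - last_value) >= c:
        ttr *= speedup    # data outran the tolerance: poll sooner
    else:
        ttr *= slowdown   # data quiescent: back off, saving server load
    return max(ttr_min, min(ttr_max, ttr))

# e.g. a stock price tracked with coherency requirement c = 0.5:
ttr = 10.0
for old, new in [(100.0, 100.9), (100.9, 101.0), (101.0, 101.1)]:
    ttr = next_ttr(ttr, old, new, c=0.5)
    print(f"next poll in {ttr:.1f}s")
```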

261 citations


Proceedings ArticleDOI
01 May 2001
TL;DR: A fast algorithm is presented, CDM, that identifies and eliminates local redundancies due to ICs, based on propagating “information labels” up the tree pattern, and shows the surprising result that the algorithm obtained by first augmenting the tree pattern using ICs, and then applying CIM, always finds the unique minimal equivalent query.
Abstract: Tree patterns form a natural basis to query tree-structured data such as XML and LDAP. Since the efficiency of tree pattern matching against a tree-structured database depends on the size of the pattern, it is essential to identify and eliminate redundant nodes in the pattern and do so as quickly as possible. In this paper, we study tree pattern minimization both in the absence and in the presence of integrity constraints (ICs) on the underlying tree-structured database. When no ICs are considered, we call the process of minimizing a tree pattern constraint-independent minimization. We develop a polynomial time algorithm called CIM for this purpose. CIM's efficiency stems from two key properties: (i) a node cannot be redundant unless its children are, and (ii) the order of elimination of redundant nodes is immaterial. When ICs are considered for minimization, we refer to it as constraint-dependent minimization. For tree-structured databases, required child/descendant and type co-occurrence ICs are very natural. Under such ICs, we show that the minimal equivalent query is unique. We show the surprising result that the algorithm obtained by first augmenting the tree pattern using ICs, and then applying CIM, always finds the unique minimal equivalent query; we refer to this algorithm as ACIM. While ACIM is also polynomial time, it can be expensive in practice because of its inherent non-locality. We then present a fast algorithm, CDM, that identifies and eliminates local redundancies due to ICs, based on propagating “information labels” up the tree pattern. CDM can be applied prior to ACIM for improving the minimization efficiency. We complement our analytical results with an experimental study that shows the effectiveness of our tree pattern minimization techniques.
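To give a feel for constraint-independent minimization, here is a deliberately simplified sketch, not the paper's CIM algorithm: patterns carry only child edges and label equality, and a child subtree is dropped when a sibling is at least as specific, i.e. the candidate has a homomorphism into that sibling. Note how it reflects the two properties quoted above: children are minimized before their parent is examined.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

def maps_into(general, specific):
    """True if pattern `general` has a homomorphism into `specific`."""
    return (general.label == specific.label and
            all(any(maps_into(g, s) for s in specific.children)
                for g in general.children))

def minimize(node):
    """Bottom-up: minimize children first, then drop any child subsumed by
    a sibling (ties between identical siblings broken by position)."""
    node.children = [minimize(c) for c in node.children]
    kept = []
    for i, c in enumerate(node.children):
        redundant = (any(maps_into(c, k) for k in kept) or
                     any(maps_into(c, d) and not maps_into(d, c)
                         for d in node.children[i + 1:]))
        if not redundant:
            kept.append(c)
    node.children = kept
    return node

# a[b][b[c]]: the bare b child is implied by the b-with-c sibling.
p = Node('a', [Node('b'), Node('b', [Node('c')])])
print(len(minimize(p).children))  # -> 1
```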

257 citations


Proceedings ArticleDOI
01 May 2001
TL;DR: A tool DATAMOLD is described that learns to automatically extract structure when seeded with a small number of training examples and builds on Hidden Markov Models (HMMs) to create a powerful probabilistic model that corroborates multiple sources of information.
Abstract: In this paper we present a method for automatically segmenting unformatted text records into structured elements. Several useful data sources today are human-generated as continuous text whereas convenient usage requires the data to be organized as structured records. A prime motivation is the warehouse address cleaning problem of transforming dirty addresses stored in large corporate databases as a single text field into subfields like “City” and “Street”. Existing tools rely on hand-tuned, domain-specific rule-based systems. We describe a tool DATAMOLD that learns to automatically extract structure when seeded with a small number of training examples. The tool builds on Hidden Markov Models (HMMs) to create a powerful probabilistic model that corroborates multiple sources of information, including the sequence of elements, their length distribution, distinguishing words from the vocabulary, and an optional external data dictionary. Experiments on real-life datasets yielded accuracy of 90% on Asian addresses and 99% on US addresses. In contrast, existing information extraction methods based on rule-learning techniques yielded considerably lower accuracy.
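The heart of such a segmenter is Viterbi decoding over element states. Below is a toy sketch in that spirit; the states, transition and emission numbers, and the tiny vocabulary are all invented for the example and are not DATAMOLD's learned model.

```python
import math

states = ['HouseNo', 'Street', 'City']
start  = {'HouseNo': 0.8, 'Street': 0.15, 'City': 0.05}
trans  = {'HouseNo': {'HouseNo': 0.1, 'Street': 0.85, 'City': 0.05},
          'Street':  {'HouseNo': 0.0, 'Street': 0.6,  'City': 0.4},
          'City':    {'HouseNo': 0.0, 'Street': 0.0,  'City': 1.0}}

def emit(state, token):
    """Toy emission model: digits look like house numbers, known suffix
    words like streets; everything keeps a small smoothing floor."""
    if state == 'HouseNo' and token.isdigit():
        return 0.9
    if state == 'Street' and token.lower() in {'road', 'rd', 'street', 'st', 'marg'}:
        return 0.8
    if state == 'City' and token.istitle():
        return 0.5
    return 0.05

def viterbi(tokens):
    """Assign the most likely element label to each token."""
    V = [{s: math.log(start[s]) + math.log(emit(s, tokens[0])) for s in states}]
    back = []
    for tok in tokens[1:]:
        col, ptr = {}, {}
        for s in states:
            best, prev = max((V[-1][p] + math.log(trans[p][s] or 1e-12), p)
                             for p in states)
            col[s] = best + math.log(emit(s, tok))
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    s = max(V[-1], key=V[-1].get)
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return list(zip(tokens, reversed(path)))

print(viterbi('14 MG Road Mumbai'.split()))
```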

255 citations


Journal ArticleDOI
TL;DR: A simple three-layered feed-forward network is developed, and it is shown that an appropriately trained network can provide satisfactory results in open wider areas, in deep water, and when the sampling and prediction interval is large, such as a week.

249 citations


Proceedings ArticleDOI
01 May 2001
TL;DR: This paper shows how to find an efficient plan for the maintenance of a set of materialized views, by exploiting common subexpressions between different view maintenance expressions, and develops a framework that cleanly integrates the various choices in a systematic and efficient manner.
Abstract: Materialized views have been found to be very effective at speeding up queries, and are increasingly being supported by commercial databases and data warehouse systems. However, whereas the amount of data entering a warehouse and the number of materialized views are rapidly increasing, the time window available for maintaining materialized views is shrinking. These trends necessitate efficient techniques for the maintenance of materialized views. In this paper, we show how to find an efficient plan for the maintenance of a set of materialized views, by exploiting common subexpressions between different view maintenance expressions. In particular, we show how to efficiently select (a) expressions and indices that can be effectively shared, by transient materialization; (b) additional expressions and indices for permanent materialization; and (c) the best maintenance plan — incremental or recomputation — for each view. These three decisions are highly interdependent, and the choice of one affects the choice of the others. We develop a framework that cleanly integrates the various choices in a systematic and efficient manner. Our evaluations show that many-fold improvement in view maintenance time can be achieved using our techniques. Our algorithms can also be used to efficiently select materialized views to speed up workloads containing queries and updates.
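As a toy version of one of these decisions, the sketch below chooses subexpressions for transient materialization under a single-number cost model; the figures are invented, and a real optimizer like the one described above would also weigh indices and the incremental-versus-recompute choice jointly.

```python
def choose_transient(candidates):
    """candidates: (name, compute_cost, materialize_cost, read_cost, n_uses).
    Materialize transiently when computing once plus re-reading beats
    recomputing the subexpression inside every maintenance expression."""
    chosen = []
    for name, compute, mat, read, uses in candidates:
        recompute_everywhere = compute * uses
        share_via_temp = compute + mat + read * uses
        if share_via_temp < recompute_everywhere:
            chosen.append(name)
    return chosen

# Invented costs for two subexpressions shared by view-maintenance plans:
print(choose_transient([('delta_sales JOIN product', 40, 10, 2, 3),
                        ('delta_sales JOIN region', 25, 30, 2, 1)]))
```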

213 citations


Journal ArticleDOI
TL;DR: For the first time, HRP is shown to be effective in degrading and precipitating industrially important azo dyes, and this study opens up a new area: the exploration of commercial dyes as inhibitors of enzymes.
Abstract: Horseradish peroxidase (HRP) is known to degrade certain recalcitrant organic compounds such as phenol and substituted phenols. Here, for the first time, we have shown HRP to be effective in degrading and precipitating industrially important azo dyes. For Remazol blue, the enzyme activity was found to be far better at pH 2.5 than at neutral pH. In addition, Remazol blue acts as a strong competitive inhibitor of HRP at neutral pH. Horseradish peroxidase shows broad substrate specificity toward a variety of azo dyes. Kinetic constants (Km(app) and Vmax(app)) for two different dyes have been determined. In addition to providing a systematic analysis of the potential of HRP in degradation of dyes, this study opens up a new area: the exploration of commercial dyes as inhibitors of enzymes. © 2001 John Wiley & Sons, Inc.
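As a reminder of how apparent constants like these are typically extracted, the sketch below fits invented initial-rate data to the Michaelis-Menten form v = Vmax·[S]/(Km + [S]); the numbers are placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

S = np.array([5, 10, 20, 40, 80, 160.0])      # substrate (dye) conc., made up
v = np.array([0.8, 1.4, 2.1, 2.8, 3.3, 3.6])  # initial rates, made up

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

(Vmax, Km), _ = curve_fit(michaelis_menten, S, v, p0=[v.max(), np.median(S)])
print(f"Vmax(app) = {Vmax:.2f}, Km(app) = {Km:.1f}")
```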

211 citations


Proceedings ArticleDOI
01 Apr 2001
TL;DR: A uniform fine-grained model for the Web in which pages are represented by their tag trees (also called their Document Object Models or DOMs) and these DOM trees are interconnected by ordinary hyperlinks is proposed.
Abstract: Topic distillation is the process of finding authoritative Web pages and comprehensive “hubs” which reciprocally endorse each other and are relevant to a given query. Hyperlink-based topic distillation has been traditionally applied to a macroscopic Web model where documents are nodes in a directed graph and hyperlinks are edges. Macroscopic models miss valuable clues such as banners, navigation panels, and template-based inclusions, which are embedded in HTML pages using markup tags. Consequently, results of macroscopic distillation algorithms have been deteriorating in quality as Web pages are becoming more complex. We propose a uniform fine-grained model for the Web in which pages are represented by their tag trees (also called their Document Object Models or DOMs) and these DOM trees are interconnected by ordinary hyperlinks. Surprisingly, macroscopic distillation algorithms do not work in the fine-grained scenario. We present a new algorithm suitable for the fine-grained model. It can dis-aggregate hubs into coherent regions by segmenting their DOM trees. Mutual endorsement between hubs and authorities involves these regions, rather than single nodes representing complete hubs. Anecdotes and measurements using a 28-query, 366,000-document benchmark suite, used in earlier topic distillation research, reveal two benefits from the new algorithm: distillation quality improves, and a by-product of distillation is the ability to extract relevant snippets from hubs which are only partially relevant to the query.
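For background, the mutual-reinforcement computation that such distillation algorithms build on is the HITS-style iteration sketched below on a toy whole-page graph; the paper's contribution is to run this kind of reinforcement over DOM regions rather than whole pages, which this minimal version does not attempt.

```python
import numpy as np

# Toy adjacency matrix: links[i, j] = 1 if page i links to page j.
links = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]], dtype=float)

hub = np.ones(links.shape[0])
for _ in range(50):
    auth = links.T @ hub               # good authorities: linked by good hubs
    auth /= np.linalg.norm(auth)
    hub = links @ auth                 # good hubs: link to good authorities
    hub /= np.linalg.norm(hub)

print('authority scores:', auth.round(3))
print('hub scores:      ', hub.round(3))
```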

Journal ArticleDOI
TL;DR: Compared to the traditional stove, the improved stoves resulted in lower pollutant emissions per kWh from wood combustion but in similar emissions from briquette and dung cake, implying that the thermal efficiency enhancement in the improved stoves was mainly from design features leading to increased heat transfer rather than improved combustion efficiency.
Abstract: This study reports emission factors of carbon monoxide and size-resolved aerosols from combustion of wood, dung cake, and biofuel briquette in traditional and improved stoves in India. Wood was the cleanest burning fuel, with higher emissions of CO from dung cake and particulate matter from both dung cake and briquette fuels. Combustion of dung cake, especially in an improved metal stove, resulted in extremely high pollutant emissions. Instead, biogas from anaerobic dung digestion should be promoted as a cooking fuel for public health protection. Pollutant emissions increased with increasing stove thermal efficiency, implying that thermal efficiency enhancement in the improved stoves was mainly from design features leading to increased heat transfer but not combustion efficiency. Compared to the traditional stove, the improved stoves resulted in lower pollutant emissions per kWh from wood combustion but in similar emissions from briquette and dung cake. Stove designs are needed with good emi...

Journal ArticleDOI
TL;DR: The Vindhyan supergroup of central India as mentioned in this paper was divided into two sequences, the rift stage and the sag stage, and a marked change in sedimentation pattern was coupled with a transient plate-margin compression in the otherwise extensional regime.

Proceedings ArticleDOI
03 Dec 2001
TL;DR: This paper identifies the potential violations of control assumptions inherent in standard real-time scheduling approaches that cause degradation in control performance and may even lead to instability, and develops practical approaches founded on control theory to deal with these violations.
Abstract: In this paper, we first identify the potential violations of control assumptions inherent in standard real-time scheduling approaches (because of the presence of jitters) that cause degradation in control performance and may even lead to instability. We then develop practical approaches founded on control theory to deal with these violations. Our approach is based on the notion of compensations, wherein controller parameters are adjusted at runtime for the presence of jitters. Through time and memory overhead analysis, and by elaborating on the implementation details, we characterize when offline and online compensations are feasible. Our experimental results confirm that our approach does compensate for the degraded control performance when EDF and FPS algorithms are used for scheduling the control tasks. Our compensation approach provides another advantage that leads to better schedulability of control tasks. This derives from the potential to derive more flexible timing constraints, beyond the periods and deadlines necessary to apply EDF and FPS. Overall, our approach provides guarantees offline that the control system will be stable at runtime, if temporal requirements are met at runtime, even when actual execution patterns are not known beforehand. With our approach, we can address the problems due to (a) sampling jitters, (b) varying delays between sampling and actuation, or (c) both, which are not addressable using traditional EDF- and FPS-based scheduling, or by previous real-time and control integration approaches.
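A minimal sketch of the compensation idea for a scalar plant follows; the first-order plant, the feedback gain, and the assumption that the previous input is held during the delay are all ours for illustration, not the paper's control laws.

```python
import math

a, b = 0.5, 1.0   # assumed scalar plant: dx/dt = a*x + b*u
K = 3.0           # nominal state-feedback gain (assumed)

def compensated_control(x_sampled, u_prev, delay):
    """Predict the state forward over the measured sampling-to-actuation
    delay (with the previously issued input still applied), then run the
    nominal feedback law on the predicted state."""
    ead = math.exp(a * delay)
    x_pred = ead * x_sampled + (ead - 1.0) / a * b * u_prev
    return -K * x_pred

print(compensated_control(1.0, 0.0, 0.0))    # no delay: plain feedback, -3.0
print(compensated_control(1.0, -3.0, 0.02))  # gain adjusted for a 20 ms delay
```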

Journal ArticleDOI
TL;DR: In this article, the problem of determining the location of a crack in a beam of varying depth when the lowest three natural frequencies of the cracked beam are known is solved using the finite element approach.

Proceedings ArticleDOI
01 Sep 2001
TL;DR: An enhanced topic distillation algorithm is presented which analyzes text, the markup tag trees that constitute HTML pages, and hyperlinks between pages and identifies subtrees which have high text- and hyperlink-based coherence w.r.t. the query.
Abstract: Topic distillation is the analysis of hyperlink graph structure to identify mutually reinforcing authorities (popular pages) and hubs (comprehensive lists of links to authorities). Topic distillation is becoming common in Web search engines, but the best-known algorithms model the Web graph at a coarse grain, with whole pages as single nodes. Such models may lose vital details in the markup tag structure of the pages, and thus lead to a tightly linked irrelevant subgraph winning over a relatively sparse relevant subgraph, a phenomenon called topic drift or contamination. The problem gets especially severe in the face of increasingly complex pages with navigation panels and advertisement links. We present an enhanced topic distillation algorithm which analyzes text, the markup tag trees that constitute HTML pages, and hyperlinks between pages. It thereby identifies subtrees which have high text- and hyperlink-based coherence w.r.t. the query. These subtrees get preferential treatment in the mutual reinforcement process. Using over 50 queries, 28 from earlier topic distillation work, we analyzed over 700,000 pages and obtained quantitative and anecdotal evidence that the new algorithm reduces topic drift.

Journal ArticleDOI
TL;DR: A small percentage of the particles present in pulverized coal ash consists of thin-walled hollow spheres, or cenospheres, which float on the ash slurry when it is impounded in ash ponds or lagoons.

Journal ArticleDOI
TL;DR: Biochemical and genetic experiments that support an alternative ‘protein–protein interaction model’ provided new insights into how Gal3p interacts with the Gal80p–Gal4p complex, alleviates the repression of Gal80p, and thus allows Gal4p to activate transcription.
Abstract: In the yeast Saccharomyces cerevisiae, the interplay between Gal3p, Gal80p and Gal4p determines the transcriptional status of the genes needed for galactose utilization. The interaction between Gal80p and Gal4p has been studied in great detail; however, our understanding of the mechanism of Gal3p in transducing the signal from galactose to Gal4p has only begun to emerge recently. Historically, Gal3p was believed to be an enzyme (catalytic model) that converts galactose to an inducer or co-inducer, which was thought to interact with Gal80p, the repressor of the system. However, recent genetic analyses indicate an alternative ‘protein–protein interaction model’. According to this model, Gal3p is activated by galactose, which leads to its interaction with Gal80p. Biochemical and genetic experiments that support this model provided new insights into how Gal3p interacts with the Gal80p–Gal4p complex, alleviates the repression of Gal80p and thus allows Gal4p to activate transcription. Recently, a galactose-independent signal was suggested to co-ordinate the induction of GAL genes with the energy status of the cell.

Journal ArticleDOI
TL;DR: In this article, it was shown that the new polynomial invariants for knots, up to nine crossings, agree with the Ooguri-Vafa conjecture relating Chern-Simons gauge theory to topological string theory on the resolution of the conifold.

Journal ArticleDOI
TL;DR: An experimental study of the flow of different materials in quasi-two-dimensional rotating cylinders is carried out using flow visualization; it shows that the shapes of the scaled surface profiles and the scaled layer thickness profiles are nearly identical when Froude number and size ratio are held constant, for each material.
Abstract: An experimental study of the flow of different materials (steel balls, glass beads, and sand) in quasi-two-dimensional rotating cylinders is carried out using flow visualization. The flow in the rotating cylinder comprises a thin flowing surface layer with the remaining particles rotating as a fixed bed. Experimental results indicate that the scaled layer thickness increases with increasing Froude number ($Fr = \omega^2 R/g$, where $\omega$ is the angular speed, $R$ is the cylinder radius, and $g$ the acceleration due to gravity) and with increase in size ratio ($s = d/R$, where $d$ is the particle diameter). The free surface profile is nearly flat at low $Fr$ and becomes increasingly S-shaped with increasing $Fr$. The layer thickness profiles, which are symmetric at low $Fr$, become skewed at high values of $Fr$ and small $s$. The dynamic angles of repose for all the materials studied show a near-linear increase with rotational speed ($\omega$). Scaling analysis of the experimental data shows that the shapes of the scaled surface profiles and the scaled layer thickness profiles are nearly identical when Froude number and size ratio are held constant, for each material. The surface profiles and layer thickness profiles are also found to be nearly independent of the material used. The dynamic angle of repose ($\beta$), however, does not scale with $Fr$ and $s$ and depends on the particle properties. The experimental results are compared to continuum models for flow in the layer. The models of Elperin and Vikhansky [Europhys. Lett. 42, 619 (1998)] and Makse [Phys. Rev. Lett. 83, 3186 (1999)] show good agreement at low $Fr$, while that of Khakhar et al. [Phys. Fluids 9, 31 (1997)] gives good predictions over the entire range of parameters considered. An analysis of the data indicates that the velocity gradient ($\dot{\gamma}$) is nearly constant along the layer at low $Fr$, and the value calculated at the layer midpoint varies as $\dot{\gamma}_0 \propto [g\sin(\beta_0 - \beta_s)/d\cos\beta_s]^{1/2}$ for all the experimental data, where $\beta_s$ is the static angle of repose and $\beta_0$ is the interface angle at the layer midpoint. An extension of “heap” models (BCRE, BRdG) is used to predict the interface angle profiles, which are in reasonable agreement with experimental measurements.
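Reading the reported scaling directly as a number: with illustrative values (not the paper's data) for particle size and the two angles, the characteristic shear rate follows from $\dot{\gamma}_0 \propto [g\sin(\beta_0 - \beta_s)/d\cos\beta_s]^{1/2}$.

```python
import math

g = 9.81                       # m/s^2
d = 0.002                      # particle diameter, m (assumed 2 mm beads)
beta_0 = math.radians(30.0)    # interface angle at layer midpoint (assumed)
beta_s = math.radians(25.0)    # static angle of repose (assumed)

gamma_dot_0 = math.sqrt(g * math.sin(beta_0 - beta_s) / (d * math.cos(beta_s)))
print(f'characteristic shear rate ~ {gamma_dot_0:.0f} 1/s')
```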

Journal ArticleDOI
TL;DR: In this article, the peak temperatures attained at different points during deposition of weld beads in stainless steel and low carbon steel weld pads were compared, and the residual stress patterns developed, the change in the peak tensile stress with the deposition of welding beads and the relation between peak temperatures and residual stresses in the weld pads are discussed.

Journal ArticleDOI
TL;DR: In this paper, the authors present analytical solutions for the free vibration analysis of laminated composite and sandwich plates based on two higher-order refined theories already developed by the first author, for which analytical formulations and solutions were not reported earlier in the literature.

Journal ArticleDOI
TL;DR: A lower bound on the uncertainty product of signal representations in two FrFT domains for real signals is obtained, and it is shown that a Gaussian signal achieves the lower bound.
Abstract: The fractional Fourier transform (FrFT) can be thought of as a generalization of the Fourier transform to rotate a signal representation by an arbitrary angle α in the time-frequency plane. A lower bound on the uncertainty product of signal representations in two FrFT domains for real signals is obtained, and it is shown that a Gaussian signal achieves the lower bound. The effect of shifting and scaling the signal on the uncertainty relation is discussed. An example is given in which the uncertainty relation for a real signal is obtained, and it is shown that this relation matches that given by the derived uncertainty relation.
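For reference, a commonly quoted form of this generalized uncertainty bound, stated here from general knowledge rather than transcribed from the paper, is:

```latex
% Spreads of a unit-energy signal in fractional domains \alpha and \beta:
\Delta x_{\alpha}\,\Delta x_{\beta} \;\ge\; \frac{\lvert\sin(\alpha-\beta)\rvert}{2}
% Equality is attained by a Gaussian; \alpha - \beta = \pi/2 recovers the
% classical time-frequency bound of 1/2.
```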

Proceedings Article
11 Sep 2001
TL;DR: A new operator is proposed that can automatically generalize from a specific problem case in detailed data and return the broadest context in which the problem occurs, summarized compactly across all possible maximal generalizations along various roll-up paths around the case.
Abstract: In this paper we propose a new operator for advanced exploration of large multidimensional databases. The proposed operator can automatically generalize from a specific problem case in detailed data and return the broadest context in which the problem occurs. Such a functionality would be useful to an analyst who, after observing a problem case, say a drop in sales for a product in a store, would like to find the exact scope of the problem. With existing tools he would have to manually search around the problem tuple trying to draw a pattern. This process is both tedious and imprecise. Our proposed operator can automate these manual steps and return in a single step a compact and easy-to-interpret summary of all possible maximal generalizations along various roll-up paths around the case. We present a flexible cost-based framework that can generalize various kinds of behaviour (not simply drops) while requiring little additional customization from the user. We design an algorithm that can work efficiently on large multidimensional hierarchical data cubes so as to be usable in an interactive setting.
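The sketch below conveys the roll-up intuition only, not the paper's cost-based algorithm: starting from a problem cell, climb a dimension hierarchy while "most" sibling cells exhibit the same behaviour. The store hierarchy, the threshold, and the problem predicate are all invented.

```python
def generalize(cell, hierarchies, has_problem, threshold=0.8):
    """cell: {dim: value}; hierarchies: {dim: {child: parent}};
    has_problem: predicate over a cell dict."""
    cur = dict(cell)
    for dim, parent_of in hierarchies.items():
        while cur[dim] in parent_of:
            parent = parent_of[cur[dim]]
            siblings = [c for c, p in parent_of.items() if p == parent]
            frac = (sum(has_problem({**cur, dim: s}) for s in siblings)
                    / len(siblings))
            if frac < threshold:
                break
            cur[dim] = parent   # the problem holds broadly: generalize
    return cur

# Toy hierarchy: two stores in the West, one in the East, under 'All'.
stores = {'s1': 'West', 's2': 'West', 's3': 'East',
          'West': 'All', 'East': 'All'}
bad = {'s1', 's2', 'West'}      # the sales drop affects all Western stores
print(generalize({'store': 's1'}, {'store': stores},
                 lambda c: c['store'] in bad))   # -> {'store': 'West'}
```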

Journal ArticleDOI
TL;DR: A dibenzyl substituted poly(3,4-propylenedioxythiophene) was designed and synthesized, and exhibited a contrast of 89% at 632 nm with switching speeds of 400 milliseconds and a coloration efficiency of 575 cm2 C−1.
Abstract: A dibenzyl substituted poly(3,4-propylenedioxythiophene) was designed and synthesized, and exhibited a contrast of 89% at 632 nm with switching speeds of 400 ms and coloration efficiency of 575 cm2 C−1.

Journal ArticleDOI
TL;DR: In this paper, a physically based model and formulation for industrial load management is presented, which utilizes an integer linear programming technique for minimizing the electricity costs by scheduling the loads satisfying the process, storage and production constraints.
Abstract: This paper presents a physically based model and formulation for industrial load management. The formulation utilizes an integer linear programming technique for minimizing the electricity costs by scheduling the loads satisfying the process, storage and production constraints. The proposed strategy is evaluated by a case study for a typical flour mill with different load management options. The results show that significant reductions in peak electricity consumption are possible under time-of-use tariffs.
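A minimal concrete instance of such a formulation, with an invented time-of-use tariff and machine rating rather than the paper's case-study data, can be written with the PuLP ILP library:

```python
import pulp

hours = range(24)
tariff = [3.0 if 9 <= h < 21 else 1.5 for h in hours]  # Rs/kWh, assumed TOU
power_kw = 100        # load rating, assumed
required_hours = 10   # production constraint: run 10 h/day, assumed

prob = pulp.LpProblem('mill_schedule', pulp.LpMinimize)
on = pulp.LpVariable.dicts('on', hours, cat='Binary')  # run the mill in hour h?
prob += pulp.lpSum(on[h] * power_kw * tariff[h] for h in hours)  # energy cost
prob += pulp.lpSum(on[h] for h in hours) == required_hours       # production
prob.solve(pulp.PULP_CBC_CMD(msg=False))

print('run during hours:', [h for h in hours if on[h].value() == 1])
```

The solver naturally pushes the running hours into the cheap off-peak band, which is the peak-shaving behaviour the case study reports.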

Journal ArticleDOI
TL;DR: In this paper, the preparation and characterization of nanosized metal oxide particles (Fe2O3, ZnO and PbO) inside the mesopore channels of MCM-41 and MCM-48 silicate molecular sieves is described.

Journal ArticleDOI
TL;DR: This paper proposes an automated qualitative shape analysis formalism for detecting and diagnosing different kinds of oscillations in control loops and extends the earlier QSA methodology to make it more robust by developing an algorithm for automatic identification of the appropriate global time-scales.

Proceedings ArticleDOI
01 May 2001
TL;DR: This work presents a general model for schedules with pipelining, shows that finding a valid schedule with minimum cost is NP-hard, and presents a greedy heuristic for finding good schedules.
Abstract: Database systems frequently have to execute a set of related queries, which share several common subexpressions. Multi-query optimization exploits this, by finding evaluation plans that share common results. Current approaches to multi-query optimization assume that common subexpressions are materialized. Significant performance benefits can be had if common subexpressions are pipelined to their uses, without being materialized. However, plans with pipelining may not always be realizable with limited buffer space, as we show. We present a general model for schedules with pipelining, and present a necessary and sufficient condition for determining validity of a schedule under our model. We show that finding a valid schedule with minimum cost is NP-hard. We present a greedy heuristic for finding good schedules. Finally, we present a performance study that shows the benefit of our algorithms on batches of queries from the TPC-D benchmark.
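As a toy version of the scheduling decision, the greedy rule below pipelines shared subexpressions while buffer space lasts and materializes the rest; the rule and the buffer counts are illustrative, not the paper's heuristic or its validity test.

```python
def plan(shared, buffer_budget):
    """shared: list of (name, buffers_needed_if_pipelined).
    Greedily pipeline the least buffer-hungry expressions first; the rest
    are materialized to temporary storage."""
    decisions, used = {}, 0
    for name, bufs in sorted(shared, key=lambda s: s[1]):
        if used + bufs <= buffer_budget:
            decisions[name] = 'pipeline'      # stream results to consumers
            used += bufs
        else:
            decisions[name] = 'materialize'   # fall back to a temp result
    return decisions

print(plan([('scan_orders', 2), ('join_A', 4), ('agg_B', 3)],
           buffer_budget=6))
```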

Journal ArticleDOI
TL;DR: This Account summarizes the recent developments in the hydrolysis chemistry of Group 13 trialkyl and triaryl compounds and discusses the role of water impurity in organometallic reactions involving a Group 13 alkyl and other ligands to build molecular clusters.
Abstract: This Account summarizes the recent developments in the hydrolysis chemistry of Group 13 trialkyl and triaryl compounds. Emphasis has been placed on the results obtained by us on (a) 1H NMR investigations of controlled hydrolyses of AlMes3 and GaMes3, (b) low-temperature isolation of water adducts of triaryl compounds of aluminum and gallium, (c) synthesis and structural characterization of new polyhedral alumoxanes and galloxanes, and (d) the search for an easy way to synthesize well-defined crystalline methylalumoxanes by deprotonation of the hydroxides with alkyllithium reagents. The systematic studies on the hydrolysis of tBu3Al carried out by Barron et al. are also discussed in order to elucidate the roles of (i) reaction temperature, (ii) solvent medium, and (iii) source of water molecules, in building up hitherto unknown alumoxane clusters. The role of water impurity in organometallic reactions involving a Group 13 alkyl and other ligands (such as silanetriols and phosphorus acids) to build molecular clusters has also been discussed.